Processed 2 GNN file(s) from directory: src/gnn/examples
Search pattern used: **/*.md
Files checked:
- src/gnn/examples/pymdp_pomdp_agent.md
- src/gnn/examples/rxinfer_multiagent_gnn.md
Checked 2 files, 2 valid, 0 invalid
Analyzed 2 files
Average Memory Usage: 0.50 KB
Average Inference Time: 218.62 units
Average Storage: 5.29 KB

Path: src/gnn/examples/pymdp_pomdp_agent.md
  Memory Estimate: 0.48 KB
  Inference Estimate: 154.07 units
  Storage Estimate: 3.83 KB

Path: src/gnn/examples/rxinfer_multiagent_gnn.md
  Memory Estimate: 0.52 KB
  Inference Estimate: 283.16 units
  Storage Estimate: 6.76 KB
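The averages above are plain per-file means. A quick arithmetic check against the per-file estimates reported in resource_data.json below (a sketch; values copied from that file):

```python
# Recompute the report's averages from the two per-file estimates.
mem = [0.484375, 0.5166015625]                   # KB
inf = [154.06988264859797, 283.1611446514433]    # abstract inference units
sto = [3.82846875, 6.7573515625]                 # KB

print(f"Average Memory Usage:   {sum(mem) / len(mem):.2f} KB")     # 0.50 KB
print(f"Average Inference Time: {sum(inf) / len(inf):.2f} units")  # 218.62 units
print(f"Average Storage:        {sum(sto) / len(sto):.2f} KB")     # 5.29 KB
```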
Temporal complexity reflects dependencies across time steps (e.g., from a state at t to one at t+1). It indicates the degree to which the model's behavior depends on past states or sequences.
View standalone: resource_report_detailed.html
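The timing fields in the JSON below are mutually consistent under a single implied compute rate: cpu_time_seconds equals total_flops divided by 5e10 (i.e., a 50 GFLOP/s CPU is assumed), and each batch entry's throughput_per_second equals batch_size / time_seconds. The rate is inferred from the reported numbers, not documented in the pipeline output; a minimal consistency check:

```python
# Inferred relationships among the timing fields in resource_data.json.
CPU_FLOPS = 5e10  # implied 50 GFLOP/s rate (inferred from the data, not documented)

def cpu_time_seconds(total_flops: float) -> float:
    return total_flops / CPU_FLOPS

def throughput_per_second(batch_size: int, time_seconds: float) -> float:
    return batch_size / time_seconds

# pymdp_pomdp_agent.md: total_flops = 1050.0 -> cpu_time_seconds = 2.1e-08
assert abs(cpu_time_seconds(1050.0) - 2.1e-08) < 1e-15
# batch_8 entry: flops = 6674.97..., throughput ~= 5.99e7 items/second
print(throughput_per_second(8, 6674.971489500035 / CPU_FLOPS))
```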
{
"/home/trim/Documents/GitHub/GeneralizedNotationNotation/src/gnn/examples/pymdp_pomdp_agent.md": {
"file": "/home/trim/Documents/GitHub/GeneralizedNotationNotation/src/gnn/examples/pymdp_pomdp_agent.md",
"model_name": "Multifactor PyMDP Agent v1",
"memory_estimate": 0.484375,
"inference_estimate": 154.06988264859797,
"storage_estimate": 3.82846875,
"flops_estimate": {
"total_flops": 1050.0,
"matrix_operations": 0,
"element_operations": 0,
"nonlinear_operations": 0
},
"inference_time_estimate": {
"cpu_time_seconds": 2.1e-08,
"cpu_time_ms": 2.1e-05,
"cpu_time_us": 0.020999999999999998
},
"batched_inference_estimate": {
"batch_1": {
"flops": 1050.0,
"time_seconds": 2.1e-08,
"throughput_per_second": 47619047.61904762
},
"batch_8": {
"flops": 6674.971489500035,
"time_seconds": 1.334994297900007e-07,
"throughput_per_second": 59925349.58826627
},
"batch_32": {
"flops": 25518.25782075925,
"time_seconds": 5.10365156415185e-07,
"throughput_per_second": 62700205.13306323
},
"batch_128": {
"flops": 99830.77636640746,
"time_seconds": 1.9966155273281492e-06,
"throughput_per_second": 64108486.710652955
},
"batch_512": {
"flops": 394234.3967437306,
"time_seconds": 7.884687934874611e-06,
"throughput_per_second": 64935987.85760216
}
},
"model_overhead": {
"compilation_ms": 79,
"optimization_ms": 240.5,
"memory_overhead_kb": 2.572265625
},
"complexity": {
"state_space_complexity": 6.965784284662087,
"graph_density": 0.004761904761904762,
"avg_in_degree": 1.0,
"avg_out_degree": 1.0,
"max_in_degree": 1,
"max_out_degree": 1,
"cyclic_complexity": 0,
"temporal_complexity": 0.0,
"equation_complexity": 8.76,
"overall_complexity": 8.741273094711996,
"variable_count": 21,
"edge_count": 2,
"total_state_space_dim": 124,
"max_variable_dim": 27
},
"model_info": {
"variables_count": 21,
"edges_count": 2,
"time_spec": "Dynamic",
"equation_count": 5
}
},
"/home/trim/Documents/GitHub/GeneralizedNotationNotation/src/gnn/examples/rxinfer_multiagent_gnn.md": {
"file": "/home/trim/Documents/GitHub/GeneralizedNotationNotation/src/gnn/examples/rxinfer_multiagent_gnn.md",
"model_name": "Multi-agent Trajectory Planning",
"memory_estimate": 0.5166015625,
"inference_estimate": 283.1611446514433,
"storage_estimate": 6.7573515625,
"flops_estimate": {
"total_flops": 20.0,
"matrix_operations": 0,
"element_operations": 8,
"nonlinear_operations": 0
},
"inference_time_estimate": {
"cpu_time_seconds": 4e-10,
"cpu_time_ms": 4.0000000000000003e-07,
"cpu_time_us": 0.0004
},
"batched_inference_estimate": {
"batch_1": {
"flops": 20.0,
"time_seconds": 4e-10,
"throughput_per_second": 2500000000.0
},
"batch_8": {
"flops": 127.14231408571496,
"time_seconds": 2.5428462817142993e-09,
"throughput_per_second": 3146080853.383979
},
"batch_32": {
"flops": 486.0620537287476,
"time_seconds": 9.721241074574952e-09,
"throughput_per_second": 3291760769.48582
},
"batch_128": {
"flops": 1901.5385974553803,
"time_seconds": 3.8030771949107605e-08,
"throughput_per_second": 3365695552.30928
},
"batch_512": {
"flops": 7509.226604642487,
"time_seconds": 1.5018453209284973e-07,
"throughput_per_second": 3409139362.5241137
}
},
"model_overhead": {
"compilation_ms": 206,
"optimization_ms": 1820.0,
"memory_overhead_kb": 5.423828125
},
"complexity": {
"state_space_complexity": 6.820178962415188,
"graph_density": 0.0002824858757062147,
"avg_in_degree": 1.0,
"avg_out_degree": 1.0,
"max_in_degree": 1,
"max_out_degree": 1,
"cyclic_complexity": 0,
"temporal_complexity": 0.0,
"equation_complexity": 3.2577777777777777,
"overall_complexity": 5.364897390812113,
"variable_count": 60,
"edge_count": 1,
"total_state_space_dim": 112,
"max_variable_dim": 16
},
"model_info": {
"variables_count": 60,
"edges_count": 1,
"time_spec": "Dynamic",
"equation_count": 15
}
}
}
File: resource_data.json
Generated: 2025-06-06 12:52:19
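Two of the complexity fields in resource_data.json can be reproduced directly from the model structure: state_space_complexity matches log2(total_state_space_dim + 1), and graph_density matches the standard directed-graph density edge_count / (variable_count * (variable_count - 1)). These formulas are inferred from the reported values rather than taken from pipeline documentation; a sketch:

```python
import math

def state_space_complexity(total_state_space_dim: int) -> float:
    # log2(dim + 1): matches 6.965784... for dim=124 and 6.820179... for dim=112
    return math.log2(total_state_space_dim + 1)

def graph_density(variable_count: int, edge_count: int) -> float:
    # Directed-graph density: edges over all possible ordered pairs of variables.
    return edge_count / (variable_count * (variable_count - 1))

print(state_space_complexity(124), graph_density(21, 2))   # pymdp_pomdp_agent
print(state_space_complexity(112), graph_density(60, 1))   # rxinfer_multiagent
```

The remaining fields (equation_complexity, overall_complexity, and the memory/inference/storage estimates) do not reduce to an obvious closed form from the data shown here.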
Source directory: src/gnn/examples
Output directory: output/gnn_exports
{
"file_path": "/home/trim/Documents/GitHub/GeneralizedNotationNotation/src/gnn/examples/pymdp_pomdp_agent.md",
"name": "Multifactor PyMDP Agent v1",
"metadata": {
"description": "This model represents a PyMDP agent with multiple observation modalities and hidden state factors.\n- Observation modalities: \"state_observation\" (3 outcomes), \"reward\" (3 outcomes), \"decision_proprioceptive\" (3 outcomes)\n- Hidden state factors: \"reward_level\" (2 states), \"decision_state\" (3 states)\n- Control: \"decision_state\" factor is controllable with 3 possible actions.\nThe parameterization is derived from a PyMDP Python script example."
},
"states": [
{
"id": "A_m0",
"dimensions": "3,2,3,type=float",
"original_id": "A_m0"
},
{
"id": "A_m1",
"dimensions": "3,2,3,type=float",
"original_id": "A_m1"
},
{
"id": "A_m2",
"dimensions": "3,2,3,type=float",
"original_id": "A_m2"
},
{
"id": "B_f0",
"dimensions": "2,2,1,type=float",
"original_id": "B_f0"
},
{
"id": "B_f1",
"dimensions": "3,3,3,type=float",
"original_id": "B_f1"
},
{
"id": "C_m0",
"dimensions": "3,type=float",
"original_id": "C_m0"
},
{
"id": "C_m1",
"dimensions": "3,type=float",
"original_id": "C_m1"
},
{
"id": "C_m2",
"dimensions": "3,type=float",
"original_id": "C_m2"
},
{
"id": "D_f0",
"dimensions": "2,type=float",
"original_id": "D_f0"
},
{
"id": "D_f1",
"dimensions": "3,type=float",
"original_id": "D_f1"
},
{
"id": "s_f0",
"dimensions": "2,1,type=float",
"original_id": "s_f0"
},
{
"id": "s_f1",
"dimensions": "3,1,type=float",
"original_id": "s_f1"
},
{
"id": "s_prime_f0",
"dimensions": "2,1,type=float",
"original_id": "s_prime_f0"
},
{
"id": "s_prime_f1",
"dimensions": "3,1,type=float",
"original_id": "s_prime_f1"
},
{
"id": "o_m0",
"dimensions": "3,1,type=float",
"original_id": "o_m0"
},
{
"id": "o_m1",
"dimensions": "3,1,type=float",
"original_id": "o_m1"
},
{
"id": "o_m2",
"dimensions": "3,1,type=float",
"original_id": "o_m2"
},
{
"id": "u_f1",
"dimensions": "1,type=int",
"original_id": "u_f1"
},
{
"id": "G",
"dimensions": "1,type=float",
"original_id": "G"
},
{
"id": "t",
"dimensions": "1,type=int",
"original_id": "t"
}
],
"parameters": {},
"initial_parameters": {},
"observations": [],
"transitions": [
{
"sources": [
"D_f0",
"D_f1"
],
"operator": "-",
"targets": [
"s_f0",
"s_f1"
],
"attributes": {}
},
{
"sources": [
"s_f0",
"s_f1"
],
"operator": "-",
"targets": [
"A_m0",
"A_m1",
"A_m2"
],
"attributes": {}
},
{
"sources": [
"A_m0",
"A_m1",
"A_m2"
],
"operator": "-",
"targets": [
"o_m0",
"o_m1",
"o_m2"
],
"attributes": {}
},
{
"sources": [
"B_f0",
"B_f1"
],
"operator": "-",
"targets": [
"s_prime_f0",
"s_prime_f1"
],
"attributes": {}
},
{
"sources": [
"C_m0",
"C_m1",
"C_m2"
],
"operator": ">",
"targets": [
"G"
],
"attributes": {}
}
],
"ontology_annotations": {
"A_m0": "LikelihoodMatrixModality0",
"A_m1": "LikelihoodMatrixModality1",
"A_m2": "LikelihoodMatrixModality2",
"B_f0": "TransitionMatrixFactor0",
"B_f1": "TransitionMatrixFactor1",
"C_m0": "LogPreferenceVectorModality0",
"C_m1": "LogPreferenceVectorModality1",
"C_m2": "LogPreferenceVectorModality2",
"D_f0": "PriorOverHiddenStatesFactor0",
"D_f1": "PriorOverHiddenStatesFactor1",
"s_f0": "HiddenStateFactor0",
"s_f1": "HiddenStateFactor1",
"s_prime_f0": "NextHiddenStateFactor0",
"s_prime_f1": "NextHiddenStateFactor1",
"o_m0": "ObservationModality0",
"o_m1": "ObservationModality1",
"o_m2": "ObservationModality2",
"\u03c0_f1": "PolicyVectorFactor1 # Distribution over actions for factor 1",
"u_f1": "ActionFactor1 # Chosen action for factor 1",
"G": "ExpectedFreeEnergy"
},
"equations_text": "",
"time_info": {
"DiscreteTime": "t",
"ModelTimeHorizon": "Unbounded # Agent definition is generally unbounded, specific simulation runs have a horizon."
},
"footer_text": "",
"signature": {},
"raw_sections": {
"GNNSection": "MultifactorPyMDPAgent",
"GNNVersionAndFlags": "GNN v1",
"ModelName": "Multifactor PyMDP Agent v1",
"ModelAnnotation": "This model represents a PyMDP agent with multiple observation modalities and hidden state factors.\n- Observation modalities: \"state_observation\" (3 outcomes), \"reward\" (3 outcomes), \"decision_proprioceptive\" (3 outcomes)\n- Hidden state factors: \"reward_level\" (2 states), \"decision_state\" (3 states)\n- Control: \"decision_state\" factor is controllable with 3 possible actions.\nThe parameterization is derived from a PyMDP Python script example.",
"StateSpaceBlock": "# A_matrices are defined per modality: A_m[observation_outcomes, state_factor0_states, state_factor1_states]\nA_m0[3,2,3,type=float] # Likelihood for modality 0 (\"state_observation\")\nA_m1[3,2,3,type=float] # Likelihood for modality 1 (\"reward\")\nA_m2[3,2,3,type=float] # Likelihood for modality 2 (\"decision_proprioceptive\")\n\n# B_matrices are defined per hidden state factor: B_f[states_next, states_previous, actions]\nB_f0[2,2,1,type=float] # Transitions for factor 0 (\"reward_level\"), 1 implicit action (uncontrolled)\nB_f1[3,3,3,type=float] # Transitions for factor 1 (\"decision_state\"), 3 actions\n\n# C_vectors are defined per modality: C_m[observation_outcomes]\nC_m0[3,type=float] # Preferences for modality 0\nC_m1[3,type=float] # Preferences for modality 1\nC_m2[3,type=float] # Preferences for modality 2\n\n# D_vectors are defined per hidden state factor: D_f[states]\nD_f0[2,type=float] # Prior for factor 0\nD_f1[3,type=float] # Prior for factor 1\n\n# Hidden States\ns_f0[2,1,type=float] # Hidden state for factor 0 (\"reward_level\")\ns_f1[3,1,type=float] # Hidden state for factor 1 (\"decision_state\")\ns_prime_f0[2,1,type=float] # Next hidden state for factor 0\ns_prime_f1[3,1,type=float] # Next hidden state for factor 1\n\n# Observations\no_m0[3,1,type=float] # Observation for modality 0\no_m1[3,1,type=float] # Observation for modality 1\no_m2[3,1,type=float] # Observation for modality 2\n\n# Policy and Control\n\u03c0_f1[3,type=float] # Policy (distribution over actions) for controllable factor 1\nu_f1[1,type=int] # Action taken for controllable factor 1\nG[1,type=float] # Expected Free Energy (overall, or can be per policy)\nt[1,type=int] # Time step",
"Connections": "(D_f0,D_f1)-(s_f0,s_f1)\n(s_f0,s_f1)-(A_m0,A_m1,A_m2)\n(A_m0,A_m1,A_m2)-(o_m0,o_m1,o_m2)\n(s_f0,s_f1,u_f1)-(B_f0,B_f1) # u_f1 primarily affects B_f1; B_f0 is uncontrolled\n(B_f0,B_f1)-(s_prime_f0,s_prime_f1)\n(C_m0,C_m1,C_m2)>G\nG>\u03c0_f1\n\u03c0_f1-u_f1\nG=ExpectedFreeEnergy\nt=Time",
"InitialParameterization": "# A_m0: num_obs[0]=3, num_states[0]=2, num_states[1]=3. Format: A[obs_idx][state_f0_idx][state_f1_idx]\n# A[0][:, :, 0] = np.ones((3,2))/3\n# A[0][:, :, 1] = np.ones((3,2))/3\n# A[0][:, :, 2] = [[0.8,0.2],[0.0,0.0],[0.2,0.8]] (obs x state_f0 for state_f1=2)\nA_m0={\n ( (0.33333,0.33333,0.8), (0.33333,0.33333,0.2) ), # obs=0; (vals for s_f1 over s_f0=0), (vals for s_f1 over s_f0=1)\n ( (0.33333,0.33333,0.0), (0.33333,0.33333,0.0) ), # obs=1\n ( (0.33333,0.33333,0.2), (0.33333,0.33333,0.8) ) # obs=2\n}\n\n# A_m1: num_obs[1]=3, num_states[0]=2, num_states[1]=3\n# A[1][2, :, 0] = [1.0,1.0]\n# A[1][0:2, :, 1] = softmax([[1,0],[0,1]]) approx [[0.731,0.269],[0.269,0.731]]\n# A[1][2, :, 2] = [1.0,1.0]\n# Others are 0.\nA_m1={\n ( (0.0,0.731,0.0), (0.0,0.269,0.0) ), # obs=0\n ( (0.0,0.269,0.0), (0.0,0.731,0.0) ), # obs=1\n ( (1.0,0.0,1.0), (1.0,0.0,1.0) ) # obs=2\n}\n\n# A_m2: num_obs[2]=3, num_states[0]=2, num_states[1]=3\n# A[2][0,:,0]=1.0; A[2][1,:,1]=1.0; A[2][2,:,2]=1.0\n# Others are 0.\nA_m2={\n ( (1.0,0.0,0.0), (1.0,0.0,0.0) ), # obs=0\n ( (0.0,1.0,0.0), (0.0,1.0,0.0) ), # obs=1\n ( (0.0,0.0,1.0), (0.0,0.0,1.0) ) # obs=2\n}\n\n# B_f0: factor 0 (2 states), uncontrolled (1 action). Format B[s_next, s_prev, action=0]\n# B_f0 = eye(2)\nB_f0={\n ( (1.0),(0.0) ), # s_next=0; (vals for s_prev over action=0)\n ( (0.0),(1.0) ) # s_next=1\n}\n\n# B_f1: factor 1 (3 states), 3 actions. Format B[s_next, s_prev, action_idx]\n# B_f1[:,:,action_idx] = eye(3) for each action\nB_f1={\n ( (1.0,1.0,1.0), (0.0,0.0,0.0), (0.0,0.0,0.0) ), # s_next=0; (vals for actions over s_prev=0), (vals for actions over s_prev=1), ...\n ( (0.0,0.0,0.0), (1.0,1.0,1.0), (0.0,0.0,0.0) ), # s_next=1\n ( (0.0,0.0,0.0), (0.0,0.0,0.0), (1.0,1.0,1.0) ) # s_next=2\n}\n\n# C_m0: num_obs[0]=3. Defaults to zeros.\nC_m0={(0.0,0.0,0.0)}\n\n# C_m1: num_obs[1]=3. C[1][0]=1.0, C[1][1]=-2.0\nC_m1={(1.0,-2.0,0.0)}\n\n# C_m2: num_obs[2]=3. Defaults to zeros.\nC_m2={(0.0,0.0,0.0)}\n\n# D_f0: factor 0 (2 states). Uniform prior.\nD_f0={(0.5,0.5)}\n\n# D_f1: factor 1 (3 states). Uniform prior.\nD_f1={(0.33333,0.33333,0.33333)}",
"InitialParameterization_raw_content": "# A_m0: num_obs[0]=3, num_states[0]=2, num_states[1]=3. Format: A[obs_idx][state_f0_idx][state_f1_idx]\n# A[0][:, :, 0] = np.ones((3,2))/3\n# A[0][:, :, 1] = np.ones((3,2))/3\n# A[0][:, :, 2] = [[0.8,0.2],[0.0,0.0],[0.2,0.8]] (obs x state_f0 for state_f1=2)\nA_m0={\n ( (0.33333,0.33333,0.8), (0.33333,0.33333,0.2) ), # obs=0; (vals for s_f1 over s_f0=0), (vals for s_f1 over s_f0=1)\n ( (0.33333,0.33333,0.0), (0.33333,0.33333,0.0) ), # obs=1\n ( (0.33333,0.33333,0.2), (0.33333,0.33333,0.8) ) # obs=2\n}\n\n# A_m1: num_obs[1]=3, num_states[0]=2, num_states[1]=3\n# A[1][2, :, 0] = [1.0,1.0]\n# A[1][0:2, :, 1] = softmax([[1,0],[0,1]]) approx [[0.731,0.269],[0.269,0.731]]\n# A[1][2, :, 2] = [1.0,1.0]\n# Others are 0.\nA_m1={\n ( (0.0,0.731,0.0), (0.0,0.269,0.0) ), # obs=0\n ( (0.0,0.269,0.0), (0.0,0.731,0.0) ), # obs=1\n ( (1.0,0.0,1.0), (1.0,0.0,1.0) ) # obs=2\n}\n\n# A_m2: num_obs[2]=3, num_states[0]=2, num_states[1]=3\n# A[2][0,:,0]=1.0; A[2][1,:,1]=1.0; A[2][2,:,2]=1.0\n# Others are 0.\nA_m2={\n ( (1.0,0.0,0.0), (1.0,0.0,0.0) ), # obs=0\n ( (0.0,1.0,0.0), (0.0,1.0,0.0) ), # obs=1\n ( (0.0,0.0,1.0), (0.0,0.0,1.0) ) # obs=2\n}\n\n# B_f0: factor 0 (2 states), uncontrolled (1 action). Format B[s_next, s_prev, action=0]\n# B_f0 = eye(2)\nB_f0={\n ( (1.0),(0.0) ), # s_next=0; (vals for s_prev over action=0)\n ( (0.0),(1.0) ) # s_next=1\n}\n\n# B_f1: factor 1 (3 states), 3 actions. Format B[s_next, s_prev, action_idx]\n# B_f1[:,:,action_idx] = eye(3) for each action\nB_f1={\n ( (1.0,1.0,1.0), (0.0,0.0,0.0), (0.0,0.0,0.0) ), # s_next=0; (vals for actions over s_prev=0), (vals for actions over s_prev=1), ...\n ( (0.0,0.0,0.0), (1.0,1.0,1.0), (0.0,0.0,0.0) ), # s_next=1\n ( (0.0,0.0,0.0), (0.0,0.0,0.0), (1.0,1.0,1.0) ) # s_next=2\n}\n\n# C_m0: num_obs[0]=3. Defaults to zeros.\nC_m0={(0.0,0.0,0.0)}\n\n# C_m1: num_obs[1]=3. C[1][0]=1.0, C[1][1]=-2.0\nC_m1={(1.0,-2.0,0.0)}\n\n# C_m2: num_obs[2]=3. Defaults to zeros.\nC_m2={(0.0,0.0,0.0)}\n\n# D_f0: factor 0 (2 states). Uniform prior.\nD_f0={(0.5,0.5)}\n\n# D_f1: factor 1 (3 states). Uniform prior.\nD_f1={(0.33333,0.33333,0.33333)}",
"Equations": "# Standard PyMDP agent equations for state inference (infer_states),\n# policy inference (infer_policies), and action sampling (sample_action).\n# qs = infer_states(o)\n# q_pi, efe = infer_policies()\n# action = sample_action()",
"Time": "Dynamic\nDiscreteTime=t\nModelTimeHorizon=Unbounded # Agent definition is generally unbounded, specific simulation runs have a horizon.",
"ActInfOntologyAnnotation": "A_m0=LikelihoodMatrixModality0\nA_m1=LikelihoodMatrixModality1\nA_m2=LikelihoodMatrixModality2\nB_f0=TransitionMatrixFactor0\nB_f1=TransitionMatrixFactor1\nC_m0=LogPreferenceVectorModality0\nC_m1=LogPreferenceVectorModality1\nC_m2=LogPreferenceVectorModality2\nD_f0=PriorOverHiddenStatesFactor0\nD_f1=PriorOverHiddenStatesFactor1\ns_f0=HiddenStateFactor0\ns_f1=HiddenStateFactor1\ns_prime_f0=NextHiddenStateFactor0\ns_prime_f1=NextHiddenStateFactor1\no_m0=ObservationModality0\no_m1=ObservationModality1\no_m2=ObservationModality2\n\u03c0_f1=PolicyVectorFactor1 # Distribution over actions for factor 1\nu_f1=ActionFactor1 # Chosen action for factor 1\nG=ExpectedFreeEnergy",
"ModelParameters": "num_hidden_states_factors: [2, 3] # s_f0[2], s_f1[3]\nnum_obs_modalities: [3, 3, 3] # o_m0[3], o_m1[3], o_m2[3]\nnum_control_factors: [1, 3] # B_f0 actions_dim=1 (uncontrolled), B_f1 actions_dim=3 (controlled by pi_f1)",
"Footer": "Multifactor PyMDP Agent v1 - GNN Representation",
"Signature": "NA"
},
"other_sections": {},
"gnnsection": {},
"gnnversionandflags": {},
"equations": "# Standard PyMDP agent equations for state inference (infer_states),\n# policy inference (infer_policies), and action sampling (sample_action).\n# qs = infer_states(o)\n# q_pi, efe = infer_policies()\n# action = sample_action()",
"ModelParameters": {
"num_hidden_states_factors": "[2, 3]",
"num_obs_modalities": "[3, 3, 3]",
"num_control_factors": "[1, 3]"
},
"num_hidden_states_factors": "[2, 3]",
"num_obs_modalities": "[3, 3, 3]",
"num_control_factors": "[1, 3]",
"footer": "Multifactor PyMDP Agent v1 - GNN Representation"
}
File: pymdp_pomdp_agent.json

GNN Model Summary: Multifactor PyMDP Agent v1
Source File: /home/trim/Documents/GitHub/GeneralizedNotationNotation/src/gnn/examples/pymdp_pomdp_agent.md
Metadata:
description: This model represents a PyMDP agent with multiple observation modalities and hidden state factors.
- Observation modalities: "state_observation" (3 outcomes), "reward" (3 outcomes), "decision_proprioceptive" (3 outcomes)
- Hidden state factors: "reward_level" (2 states), "decision_state" (3 states)
- Control: "decision_state" factor is controllable with 3 possible actions.
The parameterization is derived from a PyMDP Python script example.
States (20):
- ID: A_m0 (dimensions=3,2,3,type=float, original_id=A_m0)
- ID: A_m1 (dimensions=3,2,3,type=float, original_id=A_m1)
- ID: A_m2 (dimensions=3,2,3,type=float, original_id=A_m2)
- ID: B_f0 (dimensions=2,2,1,type=float, original_id=B_f0)
- ID: B_f1 (dimensions=3,3,3,type=float, original_id=B_f1)
- ID: C_m0 (dimensions=3,type=float, original_id=C_m0)
- ID: C_m1 (dimensions=3,type=float, original_id=C_m1)
- ID: C_m2 (dimensions=3,type=float, original_id=C_m2)
- ID: D_f0 (dimensions=2,type=float, original_id=D_f0)
- ID: D_f1 (dimensions=3,type=float, original_id=D_f1)
- ID: s_f0 (dimensions=2,1,type=float, original_id=s_f0)
- ID: s_f1 (dimensions=3,1,type=float, original_id=s_f1)
- ID: s_prime_f0 (dimensions=2,1,type=float, original_id=s_prime_f0)
- ID: s_prime_f1 (dimensions=3,1,type=float, original_id=s_prime_f1)
- ID: o_m0 (dimensions=3,1,type=float, original_id=o_m0)
- ID: o_m1 (dimensions=3,1,type=float, original_id=o_m1)
- ID: o_m2 (dimensions=3,1,type=float, original_id=o_m2)
- ID: u_f1 (dimensions=1,type=int, original_id=u_f1)
- ID: G (dimensions=1,type=float, original_id=G)
- ID: t (dimensions=1,type=int, original_id=t)
Initial Parameters (0):
General Parameters (0):
Observations (0):
Transitions (5):
  - (D_f0, D_f1) -> (s_f0, s_f1)
  - (s_f0, s_f1) -> (A_m0, A_m1, A_m2)
  - (A_m0, A_m1, A_m2) -> (o_m0, o_m1, o_m2)
  - (B_f0, B_f1) -> (s_prime_f0, s_prime_f1)
  - (C_m0, C_m1, C_m2) -> (G)
Ontology Annotations (20):
A_m0 = LikelihoodMatrixModality0
A_m1 = LikelihoodMatrixModality1
A_m2 = LikelihoodMatrixModality2
B_f0 = TransitionMatrixFactor0
B_f1 = TransitionMatrixFactor1
C_m0 = LogPreferenceVectorModality0
C_m1 = LogPreferenceVectorModality1
C_m2 = LogPreferenceVectorModality2
D_f0 = PriorOverHiddenStatesFactor0
D_f1 = PriorOverHiddenStatesFactor1
s_f0 = HiddenStateFactor0
s_f1 = HiddenStateFactor1
s_prime_f0 = NextHiddenStateFactor0
s_prime_f1 = NextHiddenStateFactor1
o_m0 = ObservationModality0
o_m1 = ObservationModality1
o_m2 = ObservationModality2
π_f1 = PolicyVectorFactor1 # Distribution over actions for factor 1
u_f1 = ActionFactor1 # Chosen action for factor 1
G = ExpectedFreeEnergy
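The Equations block of this model describes the standard PyMDP loop only in comments (infer_states, infer_policies, sample_action). A runnable sketch of that loop, with A, B, C, and D built from the InitialParameterization above (this assumes the inferactively-pymdp package; the observation indices fed to the agent are illustrative, not part of the source model):

```python
import numpy as np
from pymdp import utils
from pymdp.agent import Agent

# Likelihoods A[m], shape (obs, s_f0, s_f1), per InitialParameterization.
A = utils.obj_array(3)
A[0] = np.zeros((3, 2, 3))
A[0][:, :, 0] = 1.0 / 3
A[0][:, :, 1] = 1.0 / 3
A[0][:, :, 2] = np.array([[0.8, 0.2], [0.0, 0.0], [0.2, 0.8]])
A[1] = np.zeros((3, 2, 3))
A[1][2, :, 0] = 1.0
A[1][2, :, 2] = 1.0
A[1][0:2, :, 1] = np.array([[0.731, 0.269], [0.269, 0.731]])  # ~softmax(eye(2))
A[2] = np.zeros((3, 2, 3))
for i in range(3):
    A[2][i, :, i] = 1.0

# Transitions B[f], shape (s_next, s_prev, action); f0 uncontrolled, f1 has 3 actions.
B = utils.obj_array(2)
B[0] = np.eye(2).reshape(2, 2, 1)
B[1] = np.stack([np.eye(3)] * 3, axis=2)

# Log-preferences C[m] and state priors D[f].
C = utils.obj_array(3)
C[0], C[1], C[2] = np.zeros(3), np.array([1.0, -2.0, 0.0]), np.zeros(3)
D = utils.obj_array(2)
D[0], D[1] = np.array([0.5, 0.5]), np.ones(3) / 3

agent = Agent(A=A, B=B, C=C, D=D)
obs = [0, 2, 0]                     # one outcome index per modality (illustrative)
qs = agent.infer_states(obs)        # posterior beliefs over the two hidden factors
q_pi, efe = agent.infer_policies()  # policy posterior and expected free energy G
action = agent.sample_action()      # sampled control, one entry per factor
```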
File: pymdp_pomdp_agent.txt
{
"file_path": "/home/trim/Documents/GitHub/GeneralizedNotationNotation/src/gnn/examples/rxinfer_multiagent_gnn.md",
"name": "Multi-agent Trajectory Planning",
"metadata": {
"description": "This model represents a multi-agent trajectory planning scenario in RxInfer.jl.\nIt includes:\n- State space model for agents moving in a 2D environment\n- Obstacle avoidance constraints\n- Goal-directed behavior\n- Inter-agent collision avoidance\nThe model can be used to simulate trajectory planning in various environments with obstacles."
},
"states": [
{
"id": "dt",
"dimensions": "1,type=float",
"original_id": "dt"
},
{
"id": "gamma",
"dimensions": "1,type=float",
"original_id": "gamma"
},
{
"id": "nr_steps",
"dimensions": "1,type=int",
"original_id": "nr_steps"
},
{
"id": "nr_iterations",
"dimensions": "1,type=int",
"original_id": "nr_iterations"
},
{
"id": "nr_agents",
"dimensions": "1,type=int",
"original_id": "nr_agents"
},
{
"id": "softmin_temperature",
"dimensions": "1,type=float",
"original_id": "softmin_temperature"
},
{
"id": "intermediate_steps",
"dimensions": "1,type=int",
"original_id": "intermediate_steps"
},
{
"id": "save_intermediates",
"dimensions": "1,type=bool",
"original_id": "save_intermediates"
},
{
"id": "A",
"dimensions": "4,4,type=float",
"original_id": "A"
},
{
"id": "B",
"dimensions": "4,2,type=float",
"original_id": "B"
},
{
"id": "C",
"dimensions": "2,4,type=float",
"original_id": "C"
},
{
"id": "initial_state_variance",
"dimensions": "1,type=float",
"original_id": "initial_state_variance"
},
{
"id": "control_variance",
"dimensions": "1,type=float",
"original_id": "control_variance"
},
{
"id": "goal_constraint_variance",
"dimensions": "1,type=float",
"original_id": "goal_constraint_variance"
},
{
"id": "gamma_shape",
"dimensions": "1,type=float",
"original_id": "gamma_shape"
},
{
"id": "gamma_scale_factor",
"dimensions": "1,type=float",
"original_id": "gamma_scale_factor"
},
{
"id": "x_limits",
"dimensions": "2,type=float",
"original_id": "x_limits"
},
{
"id": "y_limits",
"dimensions": "2,type=float",
"original_id": "y_limits"
},
{
"id": "fps",
"dimensions": "1,type=int",
"original_id": "fps"
},
{
"id": "heatmap_resolution",
"dimensions": "1,type=int",
"original_id": "heatmap_resolution"
},
{
"id": "plot_width",
"dimensions": "1,type=int",
"original_id": "plot_width"
},
{
"id": "plot_height",
"dimensions": "1,type=int",
"original_id": "plot_height"
},
{
"id": "agent_alpha",
"dimensions": "1,type=float",
"original_id": "agent_alpha"
},
{
"id": "target_alpha",
"dimensions": "1,type=float",
"original_id": "target_alpha"
},
{
"id": "color_palette",
"dimensions": "1,type=string",
"original_id": "color_palette"
},
{
"id": "door_obstacle_center_1",
"dimensions": "2,type=float",
"original_id": "door_obstacle_center_1"
},
{
"id": "door_obstacle_size_1",
"dimensions": "2,type=float",
"original_id": "door_obstacle_size_1"
},
{
"id": "door_obstacle_center_2",
"dimensions": "2,type=float",
"original_id": "door_obstacle_center_2"
},
{
"id": "door_obstacle_size_2",
"dimensions": "2,type=float",
"original_id": "door_obstacle_size_2"
},
{
"id": "wall_obstacle_center",
"dimensions": "2,type=float",
"original_id": "wall_obstacle_center"
},
{
"id": "wall_obstacle_size",
"dimensions": "2,type=float",
"original_id": "wall_obstacle_size"
},
{
"id": "combined_obstacle_center_1",
"dimensions": "2,type=float",
"original_id": "combined_obstacle_center_1"
},
{
"id": "combined_obstacle_size_1",
"dimensions": "2,type=float",
"original_id": "combined_obstacle_size_1"
},
{
"id": "combined_obstacle_center_2",
"dimensions": "2,type=float",
"original_id": "combined_obstacle_center_2"
},
{
"id": "combined_obstacle_size_2",
"dimensions": "2,type=float",
"original_id": "combined_obstacle_size_2"
},
{
"id": "combined_obstacle_center_3",
"dimensions": "2,type=float",
"original_id": "combined_obstacle_center_3"
},
{
"id": "combined_obstacle_size_3",
"dimensions": "2,type=float",
"original_id": "combined_obstacle_size_3"
},
{
"id": "agent1_id",
"dimensions": "1,type=int",
"original_id": "agent1_id"
},
{
"id": "agent1_radius",
"dimensions": "1,type=float",
"original_id": "agent1_radius"
},
{
"id": "agent1_initial_position",
"dimensions": "2,type=float",
"original_id": "agent1_initial_position"
},
{
"id": "agent1_target_position",
"dimensions": "2,type=float",
"original_id": "agent1_target_position"
},
{
"id": "agent2_id",
"dimensions": "1,type=int",
"original_id": "agent2_id"
},
{
"id": "agent2_radius",
"dimensions": "1,type=float",
"original_id": "agent2_radius"
},
{
"id": "agent2_initial_position",
"dimensions": "2,type=float",
"original_id": "agent2_initial_position"
},
{
"id": "agent2_target_position",
"dimensions": "2,type=float",
"original_id": "agent2_target_position"
},
{
"id": "agent3_id",
"dimensions": "1,type=int",
"original_id": "agent3_id"
},
{
"id": "agent3_radius",
"dimensions": "1,type=float",
"original_id": "agent3_radius"
},
{
"id": "agent3_initial_position",
"dimensions": "2,type=float",
"original_id": "agent3_initial_position"
},
{
"id": "agent3_target_position",
"dimensions": "2,type=float",
"original_id": "agent3_target_position"
},
{
"id": "agent4_id",
"dimensions": "1,type=int",
"original_id": "agent4_id"
},
{
"id": "agent4_radius",
"dimensions": "1,type=float",
"original_id": "agent4_radius"
},
{
"id": "agent4_initial_position",
"dimensions": "2,type=float",
"original_id": "agent4_initial_position"
},
{
"id": "agent4_target_position",
"dimensions": "2,type=float",
"original_id": "agent4_target_position"
},
{
"id": "experiment_seeds",
"dimensions": "2,type=int",
"original_id": "experiment_seeds"
},
{
"id": "results_dir",
"dimensions": "1,type=string",
"original_id": "results_dir"
},
{
"id": "animation_template",
"dimensions": "1,type=string",
"original_id": "animation_template"
},
{
"id": "control_vis_filename",
"dimensions": "1,type=string",
"original_id": "control_vis_filename"
},
{
"id": "obstacle_distance_filename",
"dimensions": "1,type=string",
"original_id": "obstacle_distance_filename"
},
{
"id": "path_uncertainty_filename",
"dimensions": "1,type=string",
"original_id": "path_uncertainty_filename"
},
{
"id": "convergence_filename",
"dimensions": "1,type=string",
"original_id": "convergence_filename"
}
],
"parameters": {},
"initial_parameters": {},
"observations": [],
"transitions": [
{
"sources": [
"dt"
],
"operator": ">",
"targets": [
"A"
],
"attributes": {}
},
{
"sources": [
"A",
"B",
"C"
],
"operator": ">",
"targets": [
"state_space_model"
],
"attributes": {}
},
{
"sources": [
"state_space_model",
"initial_state_variance",
"control_variance"
],
"operator": ">",
"targets": [
"agent_trajectories"
],
"attributes": {}
},
{
"sources": [
"agent_trajectories",
"goal_constraint_variance"
],
"operator": ">",
"targets": [
"goal_directed_behavior"
],
"attributes": {}
},
{
"sources": [
"agent_trajectories",
"gamma",
"gamma_shape",
"gamma_scale_factor"
],
"operator": ">",
"targets": [
"obstacle_avoidance"
],
"attributes": {}
},
{
"sources": [
"agent_trajectories",
"nr_agents"
],
"operator": ">",
"targets": [
"collision_avoidance"
],
"attributes": {}
},
{
"sources": [
"goal_directed_behavior",
"obstacle_avoidance",
"collision_avoidance"
],
"operator": ">",
"targets": [
"planning_system"
],
"attributes": {}
}
],
"ontology_annotations": {
"dt": "TimeStep",
"gamma": "ConstraintParameter",
"nr_steps": "TrajectoryLength",
"nr_iterations": "InferenceIterations",
"nr_agents": "NumberOfAgents",
"softmin_temperature": "SoftminTemperature",
"A": "StateTransitionMatrix",
"B": "ControlInputMatrix",
"C": "ObservationMatrix",
"initial_state_variance": "InitialStateVariance",
"control_variance": "ControlVariance",
"goal_constraint_variance": "GoalConstraintVariance"
},
"equations_text": "",
"time_info": {
"ModelTimeHorizon": "nr_steps"
},
"footer_text": "",
"signature": {
"Creator": "AI Assistant for GNN",
"Date": "2024-07-27",
"Status": "Example for RxInfer.jl multi-agent trajectory planning"
},
"raw_sections": {
"GNNSection": "RxInferMultiAgentTrajectoryPlanning",
"GNNVersionAndFlags": "GNN v1",
"ModelName": "Multi-agent Trajectory Planning",
"ModelAnnotation": "This model represents a multi-agent trajectory planning scenario in RxInfer.jl.\nIt includes:\n- State space model for agents moving in a 2D environment\n- Obstacle avoidance constraints\n- Goal-directed behavior\n- Inter-agent collision avoidance\nThe model can be used to simulate trajectory planning in various environments with obstacles.",
"StateSpaceBlock": "# Model parameters\ndt[1,type=float] # Time step for the state space model\ngamma[1,type=float] # Constraint parameter for the Halfspace node\nnr_steps[1,type=int] # Number of time steps in the trajectory\nnr_iterations[1,type=int] # Number of inference iterations\nnr_agents[1,type=int] # Number of agents in the simulation\nsoftmin_temperature[1,type=float] # Temperature parameter for the softmin function\nintermediate_steps[1,type=int] # Intermediate results saving interval\nsave_intermediates[1,type=bool] # Whether to save intermediate results\n\n# State space matrices\nA[4,4,type=float] # State transition matrix\nB[4,2,type=float] # Control input matrix\nC[2,4,type=float] # Observation matrix\n\n# Prior distributions\ninitial_state_variance[1,type=float] # Prior on initial state\ncontrol_variance[1,type=float] # Prior on control inputs\ngoal_constraint_variance[1,type=float] # Goal constraints variance\ngamma_shape[1,type=float] # Parameters for GammaShapeRate prior\ngamma_scale_factor[1,type=float] # Parameters for GammaShapeRate prior\n\n# Visualization parameters\nx_limits[2,type=float] # Plot boundaries (x-axis)\ny_limits[2,type=float] # Plot boundaries (y-axis)\nfps[1,type=int] # Animation frames per second\nheatmap_resolution[1,type=int] # Heatmap resolution\nplot_width[1,type=int] # Plot width\nplot_height[1,type=int] # Plot height\nagent_alpha[1,type=float] # Visualization alpha for agents\ntarget_alpha[1,type=float] # Visualization alpha for targets\ncolor_palette[1,type=string] # Color palette for visualization\n\n# Environment definitions\ndoor_obstacle_center_1[2,type=float] # Door environment, obstacle 1 center\ndoor_obstacle_size_1[2,type=float] # Door environment, obstacle 1 size\ndoor_obstacle_center_2[2,type=float] # Door environment, obstacle 2 center\ndoor_obstacle_size_2[2,type=float] # Door environment, obstacle 2 size\n\nwall_obstacle_center[2,type=float] # Wall environment, obstacle center\nwall_obstacle_size[2,type=float] # Wall environment, obstacle size\n\ncombined_obstacle_center_1[2,type=float] # Combined environment, obstacle 1 center\ncombined_obstacle_size_1[2,type=float] # Combined environment, obstacle 1 size\ncombined_obstacle_center_2[2,type=float] # Combined environment, obstacle 2 center\ncombined_obstacle_size_2[2,type=float] # Combined environment, obstacle 2 size\ncombined_obstacle_center_3[2,type=float] # Combined environment, obstacle 3 center\ncombined_obstacle_size_3[2,type=float] # Combined environment, obstacle 3 size\n\n# Agent configurations\nagent1_id[1,type=int] # Agent 1 ID\nagent1_radius[1,type=float] # Agent 1 radius\nagent1_initial_position[2,type=float] # Agent 1 initial position\nagent1_target_position[2,type=float] # Agent 1 target position\n\nagent2_id[1,type=int] # Agent 2 ID\nagent2_radius[1,type=float] # Agent 2 radius\nagent2_initial_position[2,type=float] # Agent 2 initial position\nagent2_target_position[2,type=float] # Agent 2 target position\n\nagent3_id[1,type=int] # Agent 3 ID\nagent3_radius[1,type=float] # Agent 3 radius\nagent3_initial_position[2,type=float] # Agent 3 initial position\nagent3_target_position[2,type=float] # Agent 3 target position\n\nagent4_id[1,type=int] # Agent 4 ID\nagent4_radius[1,type=float] # Agent 4 radius\nagent4_initial_position[2,type=float] # Agent 4 initial position\nagent4_target_position[2,type=float] # Agent 4 target position\n\n# Experiment configurations\nexperiment_seeds[2,type=int] # Random seeds for reproducibility\nresults_dir[1,type=string] # Base directory 
for results\nanimation_template[1,type=string] # Filename template for animations\ncontrol_vis_filename[1,type=string] # Filename for control visualization\nobstacle_distance_filename[1,type=string] # Filename for obstacle distance plot\npath_uncertainty_filename[1,type=string] # Filename for path uncertainty plot\nconvergence_filename[1,type=string] # Filename for convergence plot",
"Connections": "# Model parameters\ndt > A\n(A, B, C) > state_space_model\n\n# Agent trajectories\n(state_space_model, initial_state_variance, control_variance) > agent_trajectories\n\n# Goal constraints\n(agent_trajectories, goal_constraint_variance) > goal_directed_behavior\n\n# Obstacle avoidance\n(agent_trajectories, gamma, gamma_shape, gamma_scale_factor) > obstacle_avoidance\n\n# Collision avoidance\n(agent_trajectories, nr_agents) > collision_avoidance\n\n# Complete planning system\n(goal_directed_behavior, obstacle_avoidance, collision_avoidance) > planning_system",
"InitialParameterization": "# Model parameters\ndt=1.0\ngamma=1.0\nnr_steps=40\nnr_iterations=350\nnr_agents=4\nsoftmin_temperature=10.0\nintermediate_steps=10\nsave_intermediates=false\n\n# State space matrices\n# A = [1 dt 0 0; 0 1 0 0; 0 0 1 dt; 0 0 0 1]\nA={(1.0, 1.0, 0.0, 0.0), (0.0, 1.0, 0.0, 0.0), (0.0, 0.0, 1.0, 1.0), (0.0, 0.0, 0.0, 1.0)}\n\n# B = [0 0; dt 0; 0 0; 0 dt]\nB={(0.0, 0.0), (1.0, 0.0), (0.0, 0.0), (0.0, 1.0)}\n\n# C = [1 0 0 0; 0 0 1 0]\nC={(1.0, 0.0, 0.0, 0.0), (0.0, 0.0, 1.0, 0.0)}\n\n# Prior distributions\ninitial_state_variance=100.0\ncontrol_variance=0.1\ngoal_constraint_variance=0.00001\ngamma_shape=1.5\ngamma_scale_factor=0.5\n\n# Visualization parameters\nx_limits={(-20, 20)}\ny_limits={(-20, 20)}\nfps=15\nheatmap_resolution=100\nplot_width=800\nplot_height=400\nagent_alpha=1.0\ntarget_alpha=0.2\ncolor_palette=\"tab10\"\n\n# Environment definitions\ndoor_obstacle_center_1={(-40.0, 0.0)}\ndoor_obstacle_size_1={(70.0, 5.0)}\ndoor_obstacle_center_2={(40.0, 0.0)}\ndoor_obstacle_size_2={(70.0, 5.0)}\n\nwall_obstacle_center={(0.0, 0.0)}\nwall_obstacle_size={(10.0, 5.0)}\n\ncombined_obstacle_center_1={(-50.0, 0.0)}\ncombined_obstacle_size_1={(70.0, 2.0)}\ncombined_obstacle_center_2={(50.0, 0.0)}\ncombined_obstacle_size_2={(70.0, 2.0)}\ncombined_obstacle_center_3={(5.0, -1.0)}\ncombined_obstacle_size_3={(3.0, 10.0)}\n\n# Agent configurations\nagent1_id=1\nagent1_radius=2.5\nagent1_initial_position={(-4.0, 10.0)}\nagent1_target_position={(-10.0, -10.0)}\n\nagent2_id=2\nagent2_radius=1.5\nagent2_initial_position={(-10.0, 5.0)}\nagent2_target_position={(10.0, -15.0)}\n\nagent3_id=3\nagent3_radius=1.0\nagent3_initial_position={(-15.0, -10.0)}\nagent3_target_position={(10.0, 10.0)}\n\nagent4_id=4\nagent4_radius=2.5\nagent4_initial_position={(0.0, -10.0)}\nagent4_target_position={(-10.0, 15.0)}\n\n# Experiment configurations\nexperiment_seeds={(42, 123)}\nresults_dir=\"results\"\nanimation_template=\"{environment}_{seed}.gif\"\ncontrol_vis_filename=\"control_signals.gif\"\nobstacle_distance_filename=\"obstacle_distance.png\"\npath_uncertainty_filename=\"path_uncertainty.png\"\nconvergence_filename=\"convergence.png\"",
"InitialParameterization_raw_content": "# Model parameters\ndt=1.0\ngamma=1.0\nnr_steps=40\nnr_iterations=350\nnr_agents=4\nsoftmin_temperature=10.0\nintermediate_steps=10\nsave_intermediates=false\n\n# State space matrices\n# A = [1 dt 0 0; 0 1 0 0; 0 0 1 dt; 0 0 0 1]\nA={(1.0, 1.0, 0.0, 0.0), (0.0, 1.0, 0.0, 0.0), (0.0, 0.0, 1.0, 1.0), (0.0, 0.0, 0.0, 1.0)}\n\n# B = [0 0; dt 0; 0 0; 0 dt]\nB={(0.0, 0.0), (1.0, 0.0), (0.0, 0.0), (0.0, 1.0)}\n\n# C = [1 0 0 0; 0 0 1 0]\nC={(1.0, 0.0, 0.0, 0.0), (0.0, 0.0, 1.0, 0.0)}\n\n# Prior distributions\ninitial_state_variance=100.0\ncontrol_variance=0.1\ngoal_constraint_variance=0.00001\ngamma_shape=1.5\ngamma_scale_factor=0.5\n\n# Visualization parameters\nx_limits={(-20, 20)}\ny_limits={(-20, 20)}\nfps=15\nheatmap_resolution=100\nplot_width=800\nplot_height=400\nagent_alpha=1.0\ntarget_alpha=0.2\ncolor_palette=\"tab10\"\n\n# Environment definitions\ndoor_obstacle_center_1={(-40.0, 0.0)}\ndoor_obstacle_size_1={(70.0, 5.0)}\ndoor_obstacle_center_2={(40.0, 0.0)}\ndoor_obstacle_size_2={(70.0, 5.0)}\n\nwall_obstacle_center={(0.0, 0.0)}\nwall_obstacle_size={(10.0, 5.0)}\n\ncombined_obstacle_center_1={(-50.0, 0.0)}\ncombined_obstacle_size_1={(70.0, 2.0)}\ncombined_obstacle_center_2={(50.0, 0.0)}\ncombined_obstacle_size_2={(70.0, 2.0)}\ncombined_obstacle_center_3={(5.0, -1.0)}\ncombined_obstacle_size_3={(3.0, 10.0)}\n\n# Agent configurations\nagent1_id=1\nagent1_radius=2.5\nagent1_initial_position={(-4.0, 10.0)}\nagent1_target_position={(-10.0, -10.0)}\n\nagent2_id=2\nagent2_radius=1.5\nagent2_initial_position={(-10.0, 5.0)}\nagent2_target_position={(10.0, -15.0)}\n\nagent3_id=3\nagent3_radius=1.0\nagent3_initial_position={(-15.0, -10.0)}\nagent3_target_position={(10.0, 10.0)}\n\nagent4_id=4\nagent4_radius=2.5\nagent4_initial_position={(0.0, -10.0)}\nagent4_target_position={(-10.0, 15.0)}\n\n# Experiment configurations\nexperiment_seeds={(42, 123)}\nresults_dir=\"results\"\nanimation_template=\"{environment}_{seed}.gif\"\ncontrol_vis_filename=\"control_signals.gif\"\nobstacle_distance_filename=\"obstacle_distance.png\"\npath_uncertainty_filename=\"path_uncertainty.png\"\nconvergence_filename=\"convergence.png\"",
"Equations": "# State space model:\n# x_{t+1} = A * x_t + B * u_t + w_t, w_t ~ N(0, control_variance)\n# y_t = C * x_t + v_t, v_t ~ N(0, observation_variance)\n#\n# Obstacle avoidance constraint:\n# p(x_t | obstacle) ~ N(d(x_t, obstacle), gamma)\n# where d(x_t, obstacle) is the distance from position x_t to the nearest obstacle\n#\n# Goal constraint:\n# p(x_T | goal) ~ N(goal, goal_constraint_variance)\n# where x_T is the final position\n#\n# Collision avoidance constraint:\n# p(x_i, x_j) ~ N(||x_i - x_j|| - (r_i + r_j), gamma)\n# where x_i, x_j are positions of agents i and j, r_i, r_j are their radii",
"Time": "Dynamic\nDiscreteTime\nModelTimeHorizon=nr_steps",
"ActInfOntologyAnnotation": "dt=TimeStep\ngamma=ConstraintParameter\nnr_steps=TrajectoryLength\nnr_iterations=InferenceIterations\nnr_agents=NumberOfAgents\nsoftmin_temperature=SoftminTemperature\nA=StateTransitionMatrix\nB=ControlInputMatrix\nC=ObservationMatrix\ninitial_state_variance=InitialStateVariance\ncontrol_variance=ControlVariance\ngoal_constraint_variance=GoalConstraintVariance",
"ModelParameters": "nr_agents=4\nnr_steps=40\nnr_iterations=350",
"Footer": "Multi-agent Trajectory Planning - GNN Representation for RxInfer.jl",
"Signature": "Creator: AI Assistant for GNN\nDate: 2024-07-27\nStatus: Example for RxInfer.jl multi-agent trajectory planning"
},
"other_sections": {},
"gnnsection": {},
"gnnversionandflags": {},
"equations": "# State space model:\n# x_{t+1} = A * x_t + B * u_t + w_t, w_t ~ N(0, control_variance)\n# y_t = C * x_t + v_t, v_t ~ N(0, observation_variance)\n#\n# Obstacle avoidance constraint:\n# p(x_t | obstacle) ~ N(d(x_t, obstacle), gamma)\n# where d(x_t, obstacle) is the distance from position x_t to the nearest obstacle\n#\n# Goal constraint:\n# p(x_T | goal) ~ N(goal, goal_constraint_variance)\n# where x_T is the final position\n#\n# Collision avoidance constraint:\n# p(x_i, x_j) ~ N(||x_i - x_j|| - (r_i + r_j), gamma)\n# where x_i, x_j are positions of agents i and j, r_i, r_j are their radii",
"ModelParameters": {},
"footer": "Multi-agent Trajectory Planning - GNN Representation for RxInfer.jl"
}
File: rxinfer_multiagent_gnn.json

GNN Model Summary: Multi-agent Trajectory Planning
Source File: /home/trim/Documents/GitHub/GeneralizedNotationNotation/src/gnn/examples/rxinfer_multiagent_gnn.md
Metadata:
description: This model represents a multi-agent trajectory planning scenario in RxInfer.jl.
It includes:
- State space model for agents moving in a 2D environment
- Obstacle avoidance constraints
- Goal-directed behavior
- Inter-agent collision avoidance
The model can be used to simulate trajectory planning in various environments with obstacles.
States (60):
- ID: dt (dimensions=1,type=float, original_id=dt)
- ID: gamma (dimensions=1,type=float, original_id=gamma)
- ID: nr_steps (dimensions=1,type=int, original_id=nr_steps)
- ID: nr_iterations (dimensions=1,type=int, original_id=nr_iterations)
- ID: nr_agents (dimensions=1,type=int, original_id=nr_agents)
- ID: softmin_temperature (dimensions=1,type=float, original_id=softmin_temperature)
- ID: intermediate_steps (dimensions=1,type=int, original_id=intermediate_steps)
- ID: save_intermediates (dimensions=1,type=bool, original_id=save_intermediates)
- ID: A (dimensions=4,4,type=float, original_id=A)
- ID: B (dimensions=4,2,type=float, original_id=B)
- ID: C (dimensions=2,4,type=float, original_id=C)
- ID: initial_state_variance (dimensions=1,type=float, original_id=initial_state_variance)
- ID: control_variance (dimensions=1,type=float, original_id=control_variance)
- ID: goal_constraint_variance (dimensions=1,type=float, original_id=goal_constraint_variance)
- ID: gamma_shape (dimensions=1,type=float, original_id=gamma_shape)
- ID: gamma_scale_factor (dimensions=1,type=float, original_id=gamma_scale_factor)
- ID: x_limits (dimensions=2,type=float, original_id=x_limits)
- ID: y_limits (dimensions=2,type=float, original_id=y_limits)
- ID: fps (dimensions=1,type=int, original_id=fps)
- ID: heatmap_resolution (dimensions=1,type=int, original_id=heatmap_resolution)
- ID: plot_width (dimensions=1,type=int, original_id=plot_width)
- ID: plot_height (dimensions=1,type=int, original_id=plot_height)
- ID: agent_alpha (dimensions=1,type=float, original_id=agent_alpha)
- ID: target_alpha (dimensions=1,type=float, original_id=target_alpha)
- ID: color_palette (dimensions=1,type=string, original_id=color_palette)
- ID: door_obstacle_center_1 (dimensions=2,type=float, original_id=door_obstacle_center_1)
- ID: door_obstacle_size_1 (dimensions=2,type=float, original_id=door_obstacle_size_1)
- ID: door_obstacle_center_2 (dimensions=2,type=float, original_id=door_obstacle_center_2)
- ID: door_obstacle_size_2 (dimensions=2,type=float, original_id=door_obstacle_size_2)
- ID: wall_obstacle_center (dimensions=2,type=float, original_id=wall_obstacle_center)
- ID: wall_obstacle_size (dimensions=2,type=float, original_id=wall_obstacle_size)
- ID: combined_obstacle_center_1 (dimensions=2,type=float, original_id=combined_obstacle_center_1)
- ID: combined_obstacle_size_1 (dimensions=2,type=float, original_id=combined_obstacle_size_1)
- ID: combined_obstacle_center_2 (dimensions=2,type=float, original_id=combined_obstacle_center_2)
- ID: combined_obstacle_size_2 (dimensions=2,type=float, original_id=combined_obstacle_size_2)
- ID: combined_obstacle_center_3 (dimensions=2,type=float, original_id=combined_obstacle_center_3)
- ID: combined_obstacle_size_3 (dimensions=2,type=float, original_id=combined_obstacle_size_3)
- ID: agent1_id (dimensions=1,type=int, original_id=agent1_id)
- ID: agent1_radius (dimensions=1,type=float, original_id=agent1_radius)
- ID: agent1_initial_position (dimensions=2,type=float, original_id=agent1_initial_position)
- ID: agent1_target_position (dimensions=2,type=float, original_id=agent1_target_position)
- ID: agent2_id (dimensions=1,type=int, original_id=agent2_id)
- ID: agent2_radius (dimensions=1,type=float, original_id=agent2_radius)
- ID: agent2_initial_position (dimensions=2,type=float, original_id=agent2_initial_position)
- ID: agent2_target_position (dimensions=2,type=float, original_id=agent2_target_position)
- ID: agent3_id (dimensions=1,type=int, original_id=agent3_id)
- ID: agent3_radius (dimensions=1,type=float, original_id=agent3_radius)
- ID: agent3_initial_position (dimensions=2,type=float, original_id=agent3_initial_position)
- ID: agent3_target_position (dimensions=2,type=float, original_id=agent3_target_position)
- ID: agent4_id (dimensions=1,type=int, original_id=agent4_id)
- ID: agent4_radius (dimensions=1,type=float, original_id=agent4_radius)
- ID: agent4_initial_position (dimensions=2,type=float, original_id=agent4_initial_position)
- ID: agent4_target_position (dimensions=2,type=float, original_id=agent4_target_position)
- ID: experiment_seeds (dimensions=2,type=int, original_id=experiment_seeds)
- ID: results_dir (dimensions=1,type=string, original_id=results_dir)
- ID: animation_template (dimensions=1,type=string, original_id=animation_template)
- ID: control_vis_filename (dimensions=1,type=string, original_id=control_vis_filename)
- ID: obstacle_distance_filename (dimensions=1,type=string, original_id=obstacle_distance_filename)
- ID: path_uncertainty_filename (dimensions=1,type=string, original_id=path_uncertainty_filename)
- ID: convergence_filename (dimensions=1,type=string, original_id=convergence_filename)
Initial Parameters (0):
General Parameters (0):
Observations (0):
Transitions (7):
  - (dt) -> (A)
  - (A, B, C) -> (state_space_model)
  - (state_space_model, initial_state_variance, control_variance) -> (agent_trajectories)
  - (agent_trajectories, goal_constraint_variance) -> (goal_directed_behavior)
  - (agent_trajectories, gamma, gamma_shape, gamma_scale_factor) -> (obstacle_avoidance)
  - (agent_trajectories, nr_agents) -> (collision_avoidance)
  - (goal_directed_behavior, obstacle_avoidance, collision_avoidance) -> (planning_system)
Ontology Annotations (12):
dt = TimeStep
gamma = ConstraintParameter
nr_steps = TrajectoryLength
nr_iterations = InferenceIterations
nr_agents = NumberOfAgents
softmin_temperature = SoftminTemperature
A = StateTransitionMatrix
B = ControlInputMatrix
C = ObservationMatrix
initial_state_variance = InitialStateVariance
... (file truncated, total lines: 103)
File: rxinfer_multiagent_gnn.txt
Generated: 2025-06-06 12:52:19
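The Equations block of this model (see the JSON above) specifies a linear-Gaussian state space model, x_{t+1} = A*x_t + B*u_t + w_t and y_t = C*x_t + v_t, with A, B, C as parameterized above for dt = 1.0 (state layout: x-position, x-velocity, y-position, y-velocity). A NumPy rollout sketch, independent of RxInfer.jl itself (the constant control sequence is arbitrary illustration; the noise scale follows control_variance from the parameterization):

```python
import numpy as np

dt = 1.0
A = np.array([[1, dt, 0, 0], [0, 1, 0, 0], [0, 0, 1, dt], [0, 0, 0, 1]], float)
B = np.array([[0, 0], [dt, 0], [0, 0], [0, dt]], float)  # controls drive velocities
C = np.array([[1, 0, 0, 0], [0, 0, 1, 0]], float)        # observe (x, y) position

rng = np.random.default_rng(42)   # first entry of experiment_seeds
control_variance = 0.1
nr_steps = 40

x = np.array([-4.0, 0.0, 10.0, 0.0])  # agent1_initial_position, zero velocity
for _ in range(nr_steps):
    u = np.array([-0.2, -0.5])        # illustrative constant control toward the target
    w = rng.normal(0.0, np.sqrt(control_variance), size=4)
    x = A @ x + B @ u + w             # x_{t+1} = A x_t + B u_t + w_t
    y = C @ x                         # y_t = C x_t (noiseless read-out here)
print("final position:", y)  # the goal constraint would pin x_T near (-10, -10)
```

In the full RxInfer model these dynamics are wrapped in priors (initial_state_variance, control_variance), a terminal goal constraint with variance goal_constraint_variance, and the Halfspace/softmin obstacle and collision constraints listed in the Equations block.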
Found 2 GNN files for processing:
- src/gnn/examples/pymdp_pomdp_agent.md
- src/gnn/examples/rxinfer_multiagent_gnn.md

Pipeline execution data not available.

Pipeline output directories: gnn_processing_step/, gnn_type_check/, gnn_exports/, gnn_examples_visualization/, gnn_rendered_simulators/, test_reports/
Use the --verbose flag for detailed debugging.
Report generated by GNN Processing Pipeline Step 5 (Export)
## Parsed Sections
# GNN Example: Multifactor PyMDP Agent
# Format: Markdown representation of a Multifactor PyMDP model in Active Inference format
# Version: 1.0
# This file is machine-readable and attempts to represent a PyMDP agent with multiple observation modalities and hidden state factors.
Multifactor PyMDP Agent v1
MultifactorPyMDPAgent
GNN v1
This model represents a PyMDP agent with multiple observation modalities and hidden state factors.
- Observation modalities: "state_observation" (3 outcomes), "reward" (3 outcomes), "decision_proprioceptive" (3 outcomes)
- Hidden state factors: "reward_level" (2 states), "decision_state" (3 states)
- Control: "decision_state" factor is controllable with 3 possible actions.
The parameterization is derived from a PyMDP Python script example.
# A_matrices are defined per modality: A_m[observation_outcomes, state_factor0_states, state_factor1_states]
A_m0[3,2,3,type=float] # Likelihood for modality 0 ("state_observation")
A_m1[3,2,3,type=float] # Likelihood for modality 1 ("reward")
A_m2[3,2,3,type=float] # Likelihood for modality 2 ("decision_proprioceptive")
# B_matrices are defined per hidden state factor: B_f[states_next, states_previous, actions]
B_f0[2,2,1,type=float] # Transitions for factor 0 ("reward_level"), 1 implicit action (uncontrolled)
B_f1[3,3,3,type=float] # Transitions for factor 1 ("decision_state"), 3 actions
# C_vectors are defined per modality: C_m[observation_outcomes]
C_m0[3,type=float] # Preferences for modality 0
C_m1[3,type=float] # Preferences for modality 1
C_m2[3,type=float] # Preferences for modality 2
# D_vectors are defined per hidden state factor: D_f[states]
D_f0[2,type=float] # Prior for factor 0
D_f1[3,type=float] # Prior for factor 1
# Hidden States
s_f0[2,1,type=float] # Hidden state for factor 0 ("reward_level")
s_f1[3,1,type=float] # Hidden state for factor 1 ("decision_state")
s_prime_f0[2,1,type=float] # Next hidden state for factor 0
s_prime_f1[3,1,type=float] # Next hidden state for factor 1
# Observations
o_m0[3,1,type=float] # Observation for modality 0
o_m1[3,1,type=float] # Observation for modality 1
o_m2[3,1,type=float] # Observation for modality 2
# Policy and Control
π_f1[3,type=float] # Policy (distribution over actions) for controllable factor 1
u_f1[1,type=int] # Action taken for controllable factor 1
G[1,type=float] # Expected Free Energy (overall, or can be per policy)
t[1,type=int] # Time step
(D_f0,D_f1)-(s_f0,s_f1)
(s_f0,s_f1)-(A_m0,A_m1,A_m2)
(A_m0,A_m1,A_m2)-(o_m0,o_m1,o_m2)
(s_f0,s_f1,u_f1)-(B_f0,B_f1) # u_f1 primarily affects B_f1; B_f0 is uncontrolled
(B_f0,B_f1)-(s_prime_f0,s_prime_f1)
(C_m0,C_m1,C_m2)>G
G>π_f1
π_f1-u_f1
G=ExpectedFreeEnergy
t=Time
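Connection lines like those above are what the exporter turns into the transitions entries seen in the JSON: a source group, an operator (- for undirected, > for directed, = for an equals/annotation link), and a target group. A minimal parsing sketch (hypothetical; not the pipeline's actual parser):

```python
import re

CONN = re.compile(r"^\(?([^)>=-]+)\)?\s*([->=])\s*\(?([^)#]+)\)?")

def parse_connection(line):
    """Split '(D_f0,D_f1)-(s_f0,s_f1)' into sources, operator, targets."""
    m = CONN.match(line.strip())
    if m is None:
        return None  # blank or comment-only line
    src, op, tgt = m.groups()
    return {"sources": [s.strip() for s in src.split(",")],
            "operator": op,
            "targets": [t.strip() for t in tgt.split(",")]}

print(parse_connection("(D_f0,D_f1)-(s_f0,s_f1)"))
# {'sources': ['D_f0', 'D_f1'], 'operator': '-', 'targets': ['s_f0', 's_f1']}
print(parse_connection("(C_m0,C_m1,C_m2)>G"))
# {'sources': ['C_m0', 'C_m1', 'C_m2'], 'operator': '>', 'targets': ['G']}
```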
# A_m0: num_obs[0]=3, num_states[0]=2, num_states[1]=3. Format: A[obs_idx][state_f0_idx][state_f1_idx]
# A[0][:, :, 0] = np.ones((3,2))/3
# A[0][:, :, 1] = np.ones((3,2))/3
# A[0][:, :, 2] = [[0.8,0.2],[0.0,0.0],[0.2,0.8]] (obs x state_f0 for state_f1=2)
A_m0={
( (0.33333,0.33333,0.8), (0.33333,0.33333,0.2) ), # obs=0; (vals for s_f1 over s_f0=0), (vals for s_f1 over s_f0=1)
( (0.33333,0.33333,0.0), (0.33333,0.33333,0.0) ), # obs=1
( (0.33333,0.33333,0.2), (0.33333,0.33333,0.8) ) # obs=2
}
# A_m1: num_obs[1]=3, num_states[0]=2, num_states[1]=3
# A[1][2, :, 0] = [1.0,1.0]
# A[1][0:2, :, 1] = softmax([[1,0],[0,1]]) approx [[0.731,0.269],[0.269,0.731]]
# A[1][2, :, 2] = [1.0,1.0]
# Others are 0.
A_m1={
( (0.0,0.731,0.0), (0.0,0.269,0.0) ), # obs=0
( (0.0,0.269,0.0), (0.0,0.731,0.0) ), # obs=1
( (1.0,0.0,1.0), (1.0,0.0,1.0) ) # obs=2
}
# A_m2: num_obs[2]=3, num_states[0]=2, num_states[1]=3
# A[2][0,:,0]=1.0; A[2][1,:,1]=1.0; A[2][2,:,2]=1.0
# Others are 0.
A_m2={
( (1.0,0.0,0.0), (1.0,0.0,0.0) ), # obs=0
( (0.0,1.0,0.0), (0.0,1.0,0.0) ), # obs=1
( (0.0,0.0,1.0), (0.0,0.0,1.0) ) # obs=2
}
# B_f0: factor 0 (2 states), uncontrolled (1 action). Format B[s_next, s_prev, action=0]
# B_f0 = eye(2)
B_f0={
( (1.0),(0.0) ), # s_next=0; (vals for s_prev over action=0)
( (0.0),(1.0) ) # s_next=1
}
# B_f1: factor 1 (3 states), 3 actions. Format B[s_next, s_prev, action_idx]
# B_f1[:,:,action_idx] = eye(3) for each action
B_f1={
( (1.0,1.0,1.0), (0.0,0.0,0.0), (0.0,0.0,0.0) ), # s_next=0; (vals for actions over s_prev=0), (vals for actions over s_prev=1), ...
( (0.0,0.0,0.0), (1.0,1.0,1.0), (0.0,0.0,0.0) ), # s_next=1
( (0.0,0.0,0.0), (0.0,0.0,0.0), (1.0,1.0,1.0) ) # s_next=2
}
# C_m0: num_obs[0]=3. Defaults to zeros.
C_m0={(0.0,0.0,0.0)}
# C_m1: num_obs[1]=3. C[1][0]=1.0, C[1][1]=-2.0
C_m1={(1.0,-2.0,0.0)}
# C_m2: num_obs[2]=3. Defaults to zeros.
C_m2={(0.0,0.0,0.0)}
# D_f0: factor 0 (2 states). Uniform prior.
D_f0={(0.5,0.5)}
# D_f1: factor 1 (3 states). Uniform prior.
D_f1={(0.33333,0.33333,0.33333)}
# Standard PyMDP agent equations for state inference (infer_states),
# policy inference (infer_policies), and action sampling (sample_action).
# qs = infer_states(o)
# q_pi, efe = infer_policies()
# action = sample_action()
Dynamic
DiscreteTime=t
ModelTimeHorizon=Unbounded # Agent definition is generally unbounded, specific simulation runs have a horizon.
A_m0=LikelihoodMatrixModality0
A_m1=LikelihoodMatrixModality1
A_m2=LikelihoodMatrixModality2
B_f0=TransitionMatrixFactor0
B_f1=TransitionMatrixFactor1
C_m0=LogPreferenceVectorModality0
C_m1=LogPreferenceVectorModality1
C_m2=LogPreferenceVectorModality2
D_f0=PriorOverHiddenStatesFactor0
D_f1=PriorOverHiddenStatesFactor1
s_f0=HiddenStateFactor0
s_f1=HiddenStateFactor1
s_prime_f0=NextHiddenStateFactor0
s_prime_f1=NextHiddenStateFactor1
o_m0=ObservationModality0
o_m1=ObservationModality1
o_m2=ObservationModality2
π_f1=PolicyVectorFactor1 # Distribution over actions for factor 1
u_f1=ActionFactor1 # Chosen action for factor 1
G=ExpectedFreeEnergy
num_hidden_states_factors: [2, 3] # s_f0[2], s_f1[3]
num_obs_modalities: [3, 3, 3] # o_m0[3], o_m1[3], o_m2[3]
num_control_factors: [1, 3] # B_f0 actions_dim=1 (uncontrolled), B_f1 actions_dim=3 (controlled by pi_f1)
Multifactor PyMDP Agent v1 - GNN Representation
NA
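The states entries in the JSON export that follows (an id plus a dimensions string such as 3,2,3,type=float) correspond one-to-one to StateSpaceBlock lines of the form name[dims,type=T] # comment, as listed above. A minimal parsing sketch (hypothetical; not the pipeline's actual parser):

```python
import re

VAR_LINE = re.compile(r"^\s*(\w+)\[([^\]]+)\]")  # \w covers Unicode ids like π_f1

def parse_state_line(line):
    """Parse 'A_m0[3,2,3,type=float] # comment' into an export-style entry."""
    m = VAR_LINE.match(line)
    if m is None:
        return None  # comment or blank line
    name, dims = m.group(1), m.group(2)
    return {"id": name, "dimensions": dims, "original_id": name}

print(parse_state_line("A_m0[3,2,3,type=float] # Likelihood for modality 0"))
# {'id': 'A_m0', 'dimensions': '3,2,3,type=float', 'original_id': 'A_m0'}
print(parse_state_line("π_f1[3,type=float] # Policy for factor 1"))
# {'id': 'π_f1', 'dimensions': '3,type=float', 'original_id': 'π_f1'}
```

Note that the π_f1 line parses like any other; its absence from the "states" array in the exports (it appears only in the ontology annotations) presumably explains why the summaries count 20 states against a variable_count of 21.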
{
"_HeaderComments": "# GNN Example: Multifactor PyMDP Agent\n# Format: Markdown representation of a Multifactor PyMDP model in Active Inference format\n# Version: 1.0\n# This file is machine-readable and attempts to represent a PyMDP agent with multiple observation modalities and hidden state factors.",
"ModelName": "Multifactor PyMDP Agent v1",
"GNNSection": "MultifactorPyMDPAgent",
"GNNVersionAndFlags": "GNN v1",
"ModelAnnotation": "This model represents a PyMDP agent with multiple observation modalities and hidden state factors.\n- Observation modalities: \"state_observation\" (3 outcomes), \"reward\" (3 outcomes), \"decision_proprioceptive\" (3 outcomes)\n- Hidden state factors: \"reward_level\" (2 states), \"decision_state\" (3 states)\n- Control: \"decision_state\" factor is controllable with 3 possible actions.\nThe parameterization is derived from a PyMDP Python script example.",
"StateSpaceBlock": "# A_matrices are defined per modality: A_m[observation_outcomes, state_factor0_states, state_factor1_states]\nA_m0[3,2,3,type=float] # Likelihood for modality 0 (\"state_observation\")\nA_m1[3,2,3,type=float] # Likelihood for modality 1 (\"reward\")\nA_m2[3,2,3,type=float] # Likelihood for modality 2 (\"decision_proprioceptive\")\n\n# B_matrices are defined per hidden state factor: B_f[states_next, states_previous, actions]\nB_f0[2,2,1,type=float] # Transitions for factor 0 (\"reward_level\"), 1 implicit action (uncontrolled)\nB_f1[3,3,3,type=float] # Transitions for factor 1 (\"decision_state\"), 3 actions\n\n# C_vectors are defined per modality: C_m[observation_outcomes]\nC_m0[3,type=float] # Preferences for modality 0\nC_m1[3,type=float] # Preferences for modality 1\nC_m2[3,type=float] # Preferences for modality 2\n\n# D_vectors are defined per hidden state factor: D_f[states]\nD_f0[2,type=float] # Prior for factor 0\nD_f1[3,type=float] # Prior for factor 1\n\n# Hidden States\ns_f0[2,1,type=float] # Hidden state for factor 0 (\"reward_level\")\ns_f1[3,1,type=float] # Hidden state for factor 1 (\"decision_state\")\ns_prime_f0[2,1,type=float] # Next hidden state for factor 0\ns_prime_f1[3,1,type=float] # Next hidden state for factor 1\n\n# Observations\no_m0[3,1,type=float] # Observation for modality 0\no_m1[3,1,type=float] # Observation for modality 1\no_m2[3,1,type=float] # Observation for modality 2\n\n# Policy and Control\n\u03c0_f1[3,type=float] # Policy (distribution over actions) for controllable factor 1\nu_f1[1,type=int] # Action taken for controllable factor 1\nG[1,type=float] # Expected Free Energy (overall, or can be per policy)\nt[1,type=int] # Time step",
"Connections": "(D_f0,D_f1)-(s_f0,s_f1)\n(s_f0,s_f1)-(A_m0,A_m1,A_m2)\n(A_m0,A_m1,A_m2)-(o_m0,o_m1,o_m2)\n(s_f0,s_f1,u_f1)-(B_f0,B_f1) # u_f1 primarily affects B_f1; B_f0 is uncontrolled\n(B_f0,B_f1)-(s_prime_f0,s_prime_f1)\n(C_m0,C_m1,C_m2)>G\nG>\u03c0_f1\n\u03c0_f1-u_f1\nG=ExpectedFreeEnergy\nt=Time",
"InitialParameterization": "# A_m0: num_obs[0]=3, num_states[0]=2, num_states[1]=3. Format: A[obs_idx][state_f0_idx][state_f1_idx]\n# A[0][:, :, 0] = np.ones((3,2))/3\n# A[0][:, :, 1] = np.ones((3,2))/3\n# A[0][:, :, 2] = [[0.8,0.2],[0.0,0.0],[0.2,0.8]] (obs x state_f0 for state_f1=2)\nA_m0={\n ( (0.33333,0.33333,0.8), (0.33333,0.33333,0.2) ), # obs=0; (vals for s_f1 over s_f0=0), (vals for s_f1 over s_f0=1)\n ( (0.33333,0.33333,0.0), (0.33333,0.33333,0.0) ), # obs=1\n ( (0.33333,0.33333,0.2), (0.33333,0.33333,0.8) ) # obs=2\n}\n\n# A_m1: num_obs[1]=3, num_states[0]=2, num_states[1]=3\n# A[1][2, :, 0] = [1.0,1.0]\n# A[1][0:2, :, 1] = softmax([[1,0],[0,1]]) approx [[0.731,0.269],[0.269,0.731]]\n# A[1][2, :, 2] = [1.0,1.0]\n# Others are 0.\nA_m1={\n ( (0.0,0.731,0.0), (0.0,0.269,0.0) ), # obs=0\n ( (0.0,0.269,0.0), (0.0,0.731,0.0) ), # obs=1\n ( (1.0,0.0,1.0), (1.0,0.0,1.0) ) # obs=2\n}\n\n# A_m2: num_obs[2]=3, num_states[0]=2, num_states[1]=3\n# A[2][0,:,0]=1.0; A[2][1,:,1]=1.0; A[2][2,:,2]=1.0\n# Others are 0.\nA_m2={\n ( (1.0,0.0,0.0), (1.0,0.0,0.0) ), # obs=0\n ( (0.0,1.0,0.0), (0.0,1.0,0.0) ), # obs=1\n ( (0.0,0.0,1.0), (0.0,0.0,1.0) ) # obs=2\n}\n\n# B_f0: factor 0 (2 states), uncontrolled (1 action). Format B[s_next, s_prev, action=0]\n# B_f0 = eye(2)\nB_f0={\n ( (1.0),(0.0) ), # s_next=0; (vals for s_prev over action=0)\n ( (0.0),(1.0) ) # s_next=1\n}\n\n# B_f1: factor 1 (3 states), 3 actions. Format B[s_next, s_prev, action_idx]\n# B_f1[:,:,action_idx] = eye(3) for each action\nB_f1={\n ( (1.0,1.0,1.0), (0.0,0.0,0.0), (0.0,0.0,0.0) ), # s_next=0; (vals for actions over s_prev=0), (vals for actions over s_prev=1), ...\n ( (0.0,0.0,0.0), (1.0,1.0,1.0), (0.0,0.0,0.0) ), # s_next=1\n ( (0.0,0.0,0.0), (0.0,0.0,0.0), (1.0,1.0,1.0) ) # s_next=2\n}\n\n# C_m0: num_obs[0]=3. Defaults to zeros.\nC_m0={(0.0,0.0,0.0)}\n\n# C_m1: num_obs[1]=3. C[1][0]=1.0, C[1][1]=-2.0\nC_m1={(1.0,-2.0,0.0)}\n\n# C_m2: num_obs[2]=3. Defaults to zeros.\nC_m2={(0.0,0.0,0.0)}\n\n# D_f0: factor 0 (2 states). Uniform prior.\nD_f0={(0.5,0.5)}\n\n# D_f1: factor 1 (3 states). Uniform prior.\nD_f1={(0.33333,0.33333,0.33333)}",
"Equations": "# Standard PyMDP agent equations for state inference (infer_states),\n# policy inference (infer_policies), and action sampling (sample_action).\n# qs = infer_states(o)\n# q_pi, efe = infer_policies()\n# action = sample_action()",
"Time": "Dynamic\nDiscreteTime=t\nModelTimeHorizon=Unbounded # Agent definition is generally unbounded, specific simulation runs have a horizon.",
"ActInfOntologyAnnotation": "A_m0=LikelihoodMatrixModality0\nA_m1=LikelihoodMatrixModality1\nA_m2=LikelihoodMatrixModality2\nB_f0=TransitionMatrixFactor0\nB_f1=TransitionMatrixFactor1\nC_m0=LogPreferenceVectorModality0\nC_m1=LogPreferenceVectorModality1\nC_m2=LogPreferenceVectorModality2\nD_f0=PriorOverHiddenStatesFactor0\nD_f1=PriorOverHiddenStatesFactor1\ns_f0=HiddenStateFactor0\ns_f1=HiddenStateFactor1\ns_prime_f0=NextHiddenStateFactor0\ns_prime_f1=NextHiddenStateFactor1\no_m0=ObservationModality0\no_m1=ObservationModality1\no_m2=ObservationModality2\n\u03c0_f1=PolicyVectorFactor1 # Distribution over actions for factor 1\nu_f1=ActionFactor1 # Chosen action for factor 1\nG=ExpectedFreeEnergy",
"ModelParameters": "num_hidden_states_factors: [2, 3] # s_f0[2], s_f1[3]\nnum_obs_modalities: [3, 3, 3] # o_m0[3], o_m1[3], o_m2[3]\nnum_control_factors: [1, 3] # B_f0 actions_dim=1 (uncontrolled), B_f1 actions_dim=3 (controlled by pi_f1)",
"Footer": "Multifactor PyMDP Agent v1 - GNN Representation",
"Signature": "NA"
}
full_model_data.json
{
"ModelName": "Multifactor PyMDP Agent v1",
"ModelAnnotation": "This model represents a PyMDP agent with multiple observation modalities and hidden state factors.\n- Observation modalities: \"state_observation\" (3 outcomes), \"reward\" (3 outcomes), \"decision_proprioceptive\" (3 outcomes)\n- Hidden state factors: \"reward_level\" (2 states), \"decision_state\" (3 states)\n- Control: \"decision_state\" factor is controllable with 3 possible actions.\nThe parameterization is derived from a PyMDP Python script example.",
"GNNVersionAndFlags": "GNN v1",
"Time": "Dynamic\nDiscreteTime=t\nModelTimeHorizon=Unbounded # Agent definition is generally unbounded, specific simulation runs have a horizon.",
"ActInfOntologyAnnotation": "A_m0=LikelihoodMatrixModality0\nA_m1=LikelihoodMatrixModality1\nA_m2=LikelihoodMatrixModality2\nB_f0=TransitionMatrixFactor0\nB_f1=TransitionMatrixFactor1\nC_m0=LogPreferenceVectorModality0\nC_m1=LogPreferenceVectorModality1\nC_m2=LogPreferenceVectorModality2\nD_f0=PriorOverHiddenStatesFactor0\nD_f1=PriorOverHiddenStatesFactor1\ns_f0=HiddenStateFactor0\ns_f1=HiddenStateFactor1\ns_prime_f0=NextHiddenStateFactor0\ns_prime_f1=NextHiddenStateFactor1\no_m0=ObservationModality0\no_m1=ObservationModality1\no_m2=ObservationModality2\n\u03c0_f1=PolicyVectorFactor1 # Distribution over actions for factor 1\nu_f1=ActionFactor1 # Chosen action for factor 1\nG=ExpectedFreeEnergy"
}
model_metadata.json
RxInferMultiAgentTrajectoryPlanning
GNN v1
Multi-agent Trajectory Planning
This model represents a multi-agent trajectory planning scenario in RxInfer.jl. It includes:
- State space model for agents moving in a 2D environment
- Obstacle avoidance constraints
- Goal-directed behavior
- Inter-agent collision avoidance
The model can be used to simulate trajectory planning in various environments with obstacles.
dt[1,type=float] # Time step for the state space model
gamma[1,type=float] # Constraint parameter for the Halfspace node
nr_steps[1,type=int] # Number of time steps in the trajectory
nr_iterations[1,type=int] # Number of inference iterations
nr_agents[1,type=int] # Number of agents in the simulation
softmin_temperature[1,type=float] # Temperature parameter for the softmin function
intermediate_steps[1,type=int] # Intermediate results saving interval
save_intermediates[1,type=bool] # Whether to save intermediate results
A[4,4,type=float] # State transition matrix
B[4,2,type=float] # Control input matrix
C[2,4,type=float] # Observation matrix
initial_state_variance[1,type=float] # Prior on initial state
control_variance[1,type=float] # Prior on control inputs
goal_constraint_variance[1,type=float] # Goal constraints variance
gamma_shape[1,type=float] # Parameters for GammaShapeRate prior
gamma_scale_factor[1,type=float] # Parameters for GammaShapeRate prior
x_limits[2,type=float] # Plot boundaries (x-axis)
y_limits[2,type=float] # Plot boundaries (y-axis)
fps[1,type=int] # Animation frames per second
heatmap_resolution[1,type=int] # Heatmap resolution
plot_width[1,type=int] # Plot width
plot_height[1,type=int] # Plot height
agent_alpha[1,type=float] # Visualization alpha for agents
target_alpha[1,type=float] # Visualization alpha for targets
color_palette[1,type=string] # Color palette for visualization
door_obstacle_center_1[2,type=float] # Door environment, obstacle 1 center
door_obstacle_size_1[2,type=float] # Door environment, obstacle 1 size
door_obstacle_center_2[2,type=float] # Door environment, obstacle 2 center
door_obstacle_size_2[2,type=float] # Door environment, obstacle 2 size
wall_obstacle_center[2,type=float] # Wall environment, obstacle center
wall_obstacle_size[2,type=float] # Wall environment, obstacle size
combined_obstacle_center_1[2,type=float] # Combined environment, obstacle 1 center
combined_obstacle_size_1[2,type=float] # Combined environment, obstacle 1 size
combined_obstacle_center_2[2,type=float] # Combined environment, obstacle 2 center
combined_obstacle_size_2[2,type=float] # Combined environment, obstacle 2 size
combined_obstacle_center_3[2,type=float] # Combined environment, obstacle 3 center
combined_obstacle_size_3[2,type=float] # Combined environment, obstacle 3 size
agent1_id[1,type=int] # Agent 1 ID
agent1_radius[1,type=float] # Agent 1 radius
agent1_initial_position[2,type=float] # Agent 1 initial position
agent1_target_position[2,type=float] # Agent 1 target position
agent2_id[1,type=int] # Agent 2 ID
agent2_radius[1,type=float] # Agent 2 radius
agent2_initial_position[2,type=float] # Agent 2 initial position
agent2_target_position[2,type=float] # Agent 2 target position
agent3_id[1,type=int] # Agent 3 ID
agent3_radius[1,type=float] # Agent 3 radius
agent3_initial_position[2,type=float] # Agent 3 initial position
agent3_target_position[2,type=float] # Agent 3 target position
agent4_id[1,type=int] # Agent 4 ID
agent4_radius[1,type=float] # Agent 4 radius
agent4_initial_position[2,type=float] # Agent 4 initial position
agent4_target_position[2,type=float] # Agent 4 target position
experiment_seeds[2,type=int] # Random seeds for reproducibility
results_dir[1,type=string] # Base directory for results
animation_template[1,type=string] # Filename template for animations
control_vis_filename[1,type=string] # Filename for control visualization
obstacle_distance_filename[1,type=string] # Filename for obstacle distance plot
path_uncertainty_filename[1,type=string] # Filename for path uncertainty plot
convergence_filename[1,type=string] # Filename for convergence plot
dt > A
(A, B, C) > state_space_model
(state_space_model, initial_state_variance, control_variance) > agent_trajectories
(agent_trajectories, goal_constraint_variance) > goal_directed_behavior
(agent_trajectories, gamma, gamma_shape, gamma_scale_factor) > obstacle_avoidance
(agent_trajectories, nr_agents) > collision_avoidance
(goal_directed_behavior, obstacle_avoidance, collision_avoidance) > planning_system
dt=1.0
gamma=1.0
nr_steps=40
nr_iterations=350
nr_agents=4
softmin_temperature=10.0
intermediate_steps=10
save_intermediates=false
A={(1.0, 1.0, 0.0, 0.0), (0.0, 1.0, 0.0, 0.0), (0.0, 0.0, 1.0, 1.0), (0.0, 0.0, 0.0, 1.0)}
B={(0.0, 0.0), (1.0, 0.0), (0.0, 0.0), (0.0, 1.0)}
C={(1.0, 0.0, 0.0, 0.0), (0.0, 0.0, 1.0, 0.0)}
initial_state_variance=100.0
control_variance=0.1
goal_constraint_variance=0.00001
gamma_shape=1.5
gamma_scale_factor=0.5
x_limits={(-20, 20)}
y_limits={(-20, 20)}
fps=15
heatmap_resolution=100
plot_width=800
plot_height=400
agent_alpha=1.0
target_alpha=0.2
color_palette="tab10"
door_obstacle_center_1={(-40.0, 0.0)}
door_obstacle_size_1={(70.0, 5.0)}
door_obstacle_center_2={(40.0, 0.0)}
door_obstacle_size_2={(70.0, 5.0)}
wall_obstacle_center={(0.0, 0.0)}
wall_obstacle_size={(10.0, 5.0)}
combined_obstacle_center_1={(-50.0, 0.0)}
combined_obstacle_size_1={(70.0, 2.0)}
combined_obstacle_center_2={(50.0, 0.0)}
combined_obstacle_size_2={(70.0, 2.0)}
combined_obstacle_center_3={(5.0, -1.0)}
combined_obstacle_size_3={(3.0, 10.0)}
agent1_id=1
agent1_radius=2.5
agent1_initial_position={(-4.0, 10.0)}
agent1_target_position={(-10.0, -10.0)}
agent2_id=2
agent2_radius=1.5
agent2_initial_position={(-10.0, 5.0)}
agent2_target_position={(10.0, -15.0)}
agent3_id=3
agent3_radius=1.0
agent3_initial_position={(-15.0, -10.0)}
agent3_target_position={(10.0, 10.0)}
agent4_id=4
agent4_radius=2.5
agent4_initial_position={(0.0, -10.0)}
agent4_target_position={(-10.0, 15.0)}
experiment_seeds={(42, 123)}
results_dir="results"
animation_template="{environment}_{seed}.gif"
control_vis_filename="control_signals.gif"
obstacle_distance_filename="obstacle_distance.png"
path_uncertainty_filename="path_uncertainty.png"
convergence_filename="convergence.png"
Dynamic
DiscreteTime
ModelTimeHorizon=nr_steps
dt=TimeStep
gamma=ConstraintParameter
nr_steps=TrajectoryLength
nr_iterations=InferenceIterations
nr_agents=NumberOfAgents
softmin_temperature=SoftminTemperature
A=StateTransitionMatrix
B=ControlInputMatrix
C=ObservationMatrix
initial_state_variance=InitialStateVariance
control_variance=ControlVariance
goal_constraint_variance=GoalConstraintVariance
nr_agents=4
nr_steps=40
nr_iterations=350
Multi-agent Trajectory Planning - GNN Representation for RxInfer.jl
Creator: AI Assistant for GNN
Date: 2024-07-27
Status: Example for RxInfer.jl multi-agent trajectory planning

## Parsed Sections
# GNN Example: RxInfer Multi-agent Trajectory Planning
# Format: Markdown representation of a Multi-agent Trajectory Planning model for RxInfer.jl
# Version: 1.0
# This file is machine-readable and represents the configuration for the RxInfer.jl multi-agent trajectory planning example.
Multi-agent Trajectory Planning
RxInferMultiAgentTrajectoryPlanning
GNN v1
This model represents a multi-agent trajectory planning scenario in RxInfer.jl.
It includes:
- State space model for agents moving in a 2D environment
- Obstacle avoidance constraints
- Goal-directed behavior
- Inter-agent collision avoidance
The model can be used to simulate trajectory planning in various environments with obstacles.
# Model parameters
dt[1,type=float] # Time step for the state space model
gamma[1,type=float] # Constraint parameter for the Halfspace node
nr_steps[1,type=int] # Number of time steps in the trajectory
nr_iterations[1,type=int] # Number of inference iterations
nr_agents[1,type=int] # Number of agents in the simulation
softmin_temperature[1,type=float] # Temperature parameter for the softmin function
intermediate_steps[1,type=int] # Intermediate results saving interval
save_intermediates[1,type=bool] # Whether to save intermediate results
# State space matrices
A[4,4,type=float] # State transition matrix
B[4,2,type=float] # Control input matrix
C[2,4,type=float] # Observation matrix
# Prior distributions
initial_state_variance[1,type=float] # Prior on initial state
control_variance[1,type=float] # Prior on control inputs
goal_constraint_variance[1,type=float] # Goal constraints variance
gamma_shape[1,type=float] # Parameters for GammaShapeRate prior
gamma_scale_factor[1,type=float] # Parameters for GammaShapeRate prior
# Visualization parameters
x_limits[2,type=float] # Plot boundaries (x-axis)
y_limits[2,type=float] # Plot boundaries (y-axis)
fps[1,type=int] # Animation frames per second
heatmap_resolution[1,type=int] # Heatmap resolution
plot_width[1,type=int] # Plot width
plot_height[1,type=int] # Plot height
agent_alpha[1,type=float] # Visualization alpha for agents
target_alpha[1,type=float] # Visualization alpha for targets
color_palette[1,type=string] # Color palette for visualization
# Environment definitions
door_obstacle_center_1[2,type=float] # Door environment, obstacle 1 center
door_obstacle_size_1[2,type=float] # Door environment, obstacle 1 size
door_obstacle_center_2[2,type=float] # Door environment, obstacle 2 center
door_obstacle_size_2[2,type=float] # Door environment, obstacle 2 size
wall_obstacle_center[2,type=float] # Wall environment, obstacle center
wall_obstacle_size[2,type=float] # Wall environment, obstacle size
combined_obstacle_center_1[2,type=float] # Combined environment, obstacle 1 center
combined_obstacle_size_1[2,type=float] # Combined environment, obstacle 1 size
combined_obstacle_center_2[2,type=float] # Combined environment, obstacle 2 center
combined_obstacle_size_2[2,type=float] # Combined environment, obstacle 2 size
combined_obstacle_center_3[2,type=float] # Combined environment, obstacle 3 center
combined_obstacle_size_3[2,type=float] # Combined environment, obstacle 3 size
# Agent configurations
agent1_id[1,type=int] # Agent 1 ID
agent1_radius[1,type=float] # Agent 1 radius
agent1_initial_position[2,type=float] # Agent 1 initial position
agent1_target_position[2,type=float] # Agent 1 target position
agent2_id[1,type=int] # Agent 2 ID
agent2_radius[1,type=float] # Agent 2 radius
agent2_initial_position[2,type=float] # Agent 2 initial position
agent2_target_position[2,type=float] # Agent 2 target position
agent3_id[1,type=int] # Agent 3 ID
agent3_radius[1,type=float] # Agent 3 radius
agent3_initial_position[2,type=float] # Agent 3 initial position
agent3_target_position[2,type=float] # Agent 3 target position
agent4_id[1,type=int] # Agent 4 ID
agent4_radius[1,type=float] # Agent 4 radius
agent4_initial_position[2,type=float] # Agent 4 initial position
agent4_target_position[2,type=float] # Agent 4 target position
# Experiment configurations
experiment_seeds[2,type=int] # Random seeds for reproducibility
results_dir[1,type=string] # Base directory for results
animation_template[1,type=string] # Filename template for animations
control_vis_filename[1,type=string] # Filename for control visualization
obstacle_distance_filename[1,type=string] # Filename for obstacle distance plot
path_uncertainty_filename[1,type=string] # Filename for path uncertainty plot
convergence_filename[1,type=string] # Filename for convergence plot
# Model parameters
dt > A
(A, B, C) > state_space_model
# Agent trajectories
(state_space_model, initial_state_variance, control_variance) > agent_trajectories
# Goal constraints
(agent_trajectories, goal_constraint_variance) > goal_directed_behavior
# Obstacle avoidance
(agent_trajectories, gamma, gamma_shape, gamma_scale_factor) > obstacle_avoidance
# Collision avoidance
(agent_trajectories, nr_agents) > collision_avoidance
# Complete planning system
(goal_directed_behavior, obstacle_avoidance, collision_avoidance) > planning_system
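As a rough illustration of what the export tools later in this report do with these connection lines, here is a sketch that parses the directed (`>`) connections above into a networkx digraph. The parsing is deliberately simplified and ignores the undirected (`-`) and assignment (`=`) forms that GNN connections use elsewhere.

```python
import networkx as nx

# A few of the directed connection lines from this section.
connections = [
    "dt > A",
    "(A, B, C) > state_space_model",
    "(state_space_model, initial_state_variance, control_variance) > agent_trajectories",
]

g = nx.DiGraph()
for line in connections:
    lhs, rhs = (side.strip() for side in line.split(">"))
    sources = [s.strip() for s in lhs.strip("()").split(",")]
    for src in sources:
        g.add_edge(src, rhs)          # each source feeds the right-hand node

print(sorted(g.predecessors("state_space_model")))   # ['A', 'B', 'C']
```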
# Model parameters
dt=1.0
gamma=1.0
nr_steps=40
nr_iterations=350
nr_agents=4
softmin_temperature=10.0
intermediate_steps=10
save_intermediates=false
# State space matrices
# A = [1 dt 0 0; 0 1 0 0; 0 0 1 dt; 0 0 0 1]
A={(1.0, 1.0, 0.0, 0.0), (0.0, 1.0, 0.0, 0.0), (0.0, 0.0, 1.0, 1.0), (0.0, 0.0, 0.0, 1.0)}
# B = [0 0; dt 0; 0 0; 0 dt]
B={(0.0, 0.0), (1.0, 0.0), (0.0, 0.0), (0.0, 1.0)}
# C = [1 0 0 0; 0 0 1 0]
C={(1.0, 0.0, 0.0, 0.0), (0.0, 0.0, 1.0, 0.0)}
# Prior distributions
initial_state_variance=100.0
control_variance=0.1
goal_constraint_variance=0.00001
gamma_shape=1.5
gamma_scale_factor=0.5
# Visualization parameters
x_limits={(-20, 20)}
y_limits={(-20, 20)}
fps=15
heatmap_resolution=100
plot_width=800
plot_height=400
agent_alpha=1.0
target_alpha=0.2
color_palette="tab10"
# Environment definitions
door_obstacle_center_1={(-40.0, 0.0)}
door_obstacle_size_1={(70.0, 5.0)}
door_obstacle_center_2={(40.0, 0.0)}
door_obstacle_size_2={(70.0, 5.0)}
wall_obstacle_center={(0.0, 0.0)}
wall_obstacle_size={(10.0, 5.0)}
combined_obstacle_center_1={(-50.0, 0.0)}
combined_obstacle_size_1={(70.0, 2.0)}
combined_obstacle_center_2={(50.0, 0.0)}
combined_obstacle_size_2={(70.0, 2.0)}
combined_obstacle_center_3={(5.0, -1.0)}
combined_obstacle_size_3={(3.0, 10.0)}
# Agent configurations
agent1_id=1
agent1_radius=2.5
agent1_initial_position={(-4.0, 10.0)}
agent1_target_position={(-10.0, -10.0)}
agent2_id=2
agent2_radius=1.5
agent2_initial_position={(-10.0, 5.0)}
agent2_target_position={(10.0, -15.0)}
agent3_id=3
agent3_radius=1.0
agent3_initial_position={(-15.0, -10.0)}
agent3_target_position={(10.0, 10.0)}
agent4_id=4
agent4_radius=2.5
agent4_initial_position={(0.0, -10.0)}
agent4_target_position={(-10.0, 15.0)}
# Experiment configurations
experiment_seeds={(42, 123)}
results_dir="results"
animation_template="{environment}_{seed}.gif"
control_vis_filename="control_signals.gif"
obstacle_distance_filename="obstacle_distance.png"
path_uncertainty_filename="path_uncertainty.png"
convergence_filename="convergence.png"
# State space model:
# x_{t+1} = A * x_t + B * u_t + w_t, w_t ~ N(0, control_variance)
# y_t = C * x_t + v_t, v_t ~ N(0, observation_variance)
#
# Obstacle avoidance constraint:
# p(x_t | obstacle) ~ N(d(x_t, obstacle), gamma)
# where d(x_t, obstacle) is the distance from position x_t to the nearest obstacle
#
# Goal constraint:
# p(x_T | goal) ~ N(goal, goal_constraint_variance)
# where x_T is the final position
#
# Collision avoidance constraint:
# p(x_i, x_j) ~ N(||x_i - x_j|| - (r_i + r_j), gamma)
# where x_i, x_j are positions of agents i and j, r_i, r_j are their radii
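The state-space portion of these equations can be simulated directly. Below is a plain NumPy sketch (the generative dynamics only, not RxInfer.jl inference) using the dt-parameterized matrices and values from the InitialParameterization section. The control signal is a random placeholder, since real controls come from inference, and observation noise is omitted because the file does not specify observation_variance.

```python
import numpy as np

dt = 1.0
# A = [1 dt 0 0; 0 1 0 0; 0 0 1 dt; 0 0 0 1], B = [0 0; dt 0; 0 0; 0 dt], C = [1 0 0 0; 0 0 1 0]
A = np.array([[1, dt, 0, 0], [0, 1, 0, 0], [0, 0, 1, dt], [0, 0, 0, 1]], dtype=float)
B = np.array([[0, 0], [dt, 0], [0, 0], [0, dt]], dtype=float)
C = np.array([[1, 0, 0, 0], [0, 0, 1, 0]], dtype=float)

control_variance = 0.1
rng = np.random.default_rng(42)            # first experiment seed from the file

x = np.array([-4.0, 0.0, 10.0, 0.0])       # agent 1 initial position (-4, 10), zero velocity
for t in range(40):                        # nr_steps
    u = rng.normal(0.0, np.sqrt(control_variance), size=2)  # placeholder control input
    w = rng.normal(0.0, np.sqrt(control_variance), size=4)  # process noise w_t
    x = A @ x + B @ u + w                  # x_{t+1} = A x_t + B u_t + w_t

y = C @ x                                  # observed final 2-D position y_T = C x_T
print(np.round(y, 2))
```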
Dynamic
DiscreteTime
ModelTimeHorizon=nr_steps
dt=TimeStep
gamma=ConstraintParameter
nr_steps=TrajectoryLength
nr_iterations=InferenceIterations
nr_agents=NumberOfAgents
softmin_temperature=SoftminTemperature
A=StateTransitionMatrix
B=ControlInputMatrix
C=ObservationMatrix
initial_state_variance=InitialStateVariance
control_variance=ControlVariance
goal_constraint_variance=GoalConstraintVariance
nr_agents=4
nr_steps=40
nr_iterations=350
Multi-agent Trajectory Planning - GNN Representation for RxInfer.jl
Creator: AI Assistant for GNN
Date: 2024-07-27
Status: Example for RxInfer.jl multi-agent trajectory planning
{
"_HeaderComments": "# GNN Example: RxInfer Multi-agent Trajectory Planning\n# Format: Markdown representation of a Multi-agent Trajectory Planning model for RxInfer.jl\n# Version: 1.0\n# This file is machine-readable and represents the configuration for the RxInfer.jl multi-agent trajectory planning example.",
"ModelName": "Multi-agent Trajectory Planning",
"GNNSection": "RxInferMultiAgentTrajectoryPlanning",
"GNNVersionAndFlags": "GNN v1",
"ModelAnnotation": "This model represents a multi-agent trajectory planning scenario in RxInfer.jl.\nIt includes:\n- State space model for agents moving in a 2D environment\n- Obstacle avoidance constraints\n- Goal-directed behavior\n- Inter-agent collision avoidance\nThe model can be used to simulate trajectory planning in various environments with obstacles.",
"StateSpaceBlock": "# Model parameters\ndt[1,type=float] # Time step for the state space model\ngamma[1,type=float] # Constraint parameter for the Halfspace node\nnr_steps[1,type=int] # Number of time steps in the trajectory\nnr_iterations[1,type=int] # Number of inference iterations\nnr_agents[1,type=int] # Number of agents in the simulation\nsoftmin_temperature[1,type=float] # Temperature parameter for the softmin function\nintermediate_steps[1,type=int] # Intermediate results saving interval\nsave_intermediates[1,type=bool] # Whether to save intermediate results\n\n# State space matrices\nA[4,4,type=float] # State transition matrix\nB[4,2,type=float] # Control input matrix\nC[2,4,type=float] # Observation matrix\n\n# Prior distributions\ninitial_state_variance[1,type=float] # Prior on initial state\ncontrol_variance[1,type=float] # Prior on control inputs\ngoal_constraint_variance[1,type=float] # Goal constraints variance\ngamma_shape[1,type=float] # Parameters for GammaShapeRate prior\ngamma_scale_factor[1,type=float] # Parameters for GammaShapeRate prior\n\n# Visualization parameters\nx_limits[2,type=float] # Plot boundaries (x-axis)\ny_limits[2,type=float] # Plot boundaries (y-axis)\nfps[1,type=int] # Animation frames per second\nheatmap_resolution[1,type=int] # Heatmap resolution\nplot_width[1,type=int] # Plot width\nplot_height[1,type=int] # Plot height\nagent_alpha[1,type=float] # Visualization alpha for agents\ntarget_alpha[1,type=float] # Visualization alpha for targets\ncolor_palette[1,type=string] # Color palette for visualization\n\n# Environment definitions\ndoor_obstacle_center_1[2,type=float] # Door environment, obstacle 1 center\ndoor_obstacle_size_1[2,type=float] # Door environment, obstacle 1 size\ndoor_obstacle_center_2[2,type=float] # Door environment, obstacle 2 center\ndoor_obstacle_size_2[2,type=float] # Door environment, obstacle 2 size\n\nwall_obstacle_center[2,type=float] # Wall environment, obstacle center\nwall_obstacle_size[2,type=float] # Wall environment, obstacle size\n\ncombined_obstacle_center_1[2,type=float] # Combined environment, obstacle 1 center\ncombined_obstacle_size_1[2,type=float] # Combined environment, obstacle 1 size\ncombined_obstacle_center_2[2,type=float] # Combined environment, obstacle 2 center\ncombined_obstacle_size_2[2,type=float] # Combined environment, obstacle 2 size\ncombined_obstacle_center_3[2,type=float] # Combined environment, obstacle 3 center\ncombined_obstacle_size_3[2,type=float] # Combined environment, obstacle 3 size\n\n# Agent configurations\nagent1_id[1,type=int] # Agent 1 ID\nagent1_radius[1,type=float] # Agent 1 radius\nagent1_initial_position[2,type=float] # Agent 1 initial position\nagent1_target_position[2,type=float] # Agent 1 target position\n\nagent2_id[1,type=int] # Agent 2 ID\nagent2_radius[1,type=float] # Agent 2 radius\nagent2_initial_position[2,type=float] # Agent 2 initial position\nagent2_target_position[2,type=float] # Agent 2 target position\n\nagent3_id[1,type=int] # Agent 3 ID\nagent3_radius[1,type=float] # Agent 3 radius\nagent3_initial_position[2,type=float] # Agent 3 initial position\nagent3_target_position[2,type=float] # Agent 3 target position\n\nagent4_id[1,type=int] # Agent 4 ID\nagent4_radius[1,type=float] # Agent 4 radius\nagent4_initial_position[2,type=float] # Agent 4 initial position\nagent4_target_position[2,type=float] # Agent 4 target position\n\n# Experiment configurations\nexperiment_seeds[2,type=int] # Random seeds for reproducibility\nresults_dir[1,type=string] # Base directory 
for results\nanimation_template[1,type=string] # Filename template for animations\ncontrol_vis_filename[1,type=string] # Filename for control visualization\nobstacle_distance_filename[1,type=string] # Filename for obstacle distance plot\npath_uncertainty_filename[1,type=string] # Filename for path uncertainty plot\nconvergence_filename[1,type=string] # Filename for convergence plot",
"Connections": "# Model parameters\ndt > A\n(A, B, C) > state_space_model\n\n# Agent trajectories\n(state_space_model, initial_state_variance, control_variance) > agent_trajectories\n\n# Goal constraints\n(agent_trajectories, goal_constraint_variance) > goal_directed_behavior\n\n# Obstacle avoidance\n(agent_trajectories, gamma, gamma_shape, gamma_scale_factor) > obstacle_avoidance\n\n# Collision avoidance\n(agent_trajectories, nr_agents) > collision_avoidance\n\n# Complete planning system\n(goal_directed_behavior, obstacle_avoidance, collision_avoidance) > planning_system",
"InitialParameterization": "# Model parameters\ndt=1.0\ngamma=1.0\nnr_steps=40\nnr_iterations=350\nnr_agents=4\nsoftmin_temperature=10.0\nintermediate_steps=10\nsave_intermediates=false\n\n# State space matrices\n# A = [1 dt 0 0; 0 1 0 0; 0 0 1 dt; 0 0 0 1]\nA={(1.0, 1.0, 0.0, 0.0), (0.0, 1.0, 0.0, 0.0), (0.0, 0.0, 1.0, 1.0), (0.0, 0.0, 0.0, 1.0)}\n\n# B = [0 0; dt 0; 0 0; 0 dt]\nB={(0.0, 0.0), (1.0, 0.0), (0.0, 0.0), (0.0, 1.0)}\n\n# C = [1 0 0 0; 0 0 1 0]\nC={(1.0, 0.0, 0.0, 0.0), (0.0, 0.0, 1.0, 0.0)}\n\n# Prior distributions\ninitial_state_variance=100.0\ncontrol_variance=0.1\ngoal_constraint_variance=0.00001\ngamma_shape=1.5\ngamma_scale_factor=0.5\n\n# Visualization parameters\nx_limits={(-20, 20)}\ny_limits={(-20, 20)}\nfps=15\nheatmap_resolution=100\nplot_width=800\nplot_height=400\nagent_alpha=1.0\ntarget_alpha=0.2\ncolor_palette=\"tab10\"\n\n# Environment definitions\ndoor_obstacle_center_1={(-40.0, 0.0)}\ndoor_obstacle_size_1={(70.0, 5.0)}\ndoor_obstacle_center_2={(40.0, 0.0)}\ndoor_obstacle_size_2={(70.0, 5.0)}\n\nwall_obstacle_center={(0.0, 0.0)}\nwall_obstacle_size={(10.0, 5.0)}\n\ncombined_obstacle_center_1={(-50.0, 0.0)}\ncombined_obstacle_size_1={(70.0, 2.0)}\ncombined_obstacle_center_2={(50.0, 0.0)}\ncombined_obstacle_size_2={(70.0, 2.0)}\ncombined_obstacle_center_3={(5.0, -1.0)}\ncombined_obstacle_size_3={(3.0, 10.0)}\n\n# Agent configurations\nagent1_id=1\nagent1_radius=2.5\nagent1_initial_position={(-4.0, 10.0)}\nagent1_target_position={(-10.0, -10.0)}\n\nagent2_id=2\nagent2_radius=1.5\nagent2_initial_position={(-10.0, 5.0)}\nagent2_target_position={(10.0, -15.0)}\n\nagent3_id=3\nagent3_radius=1.0\nagent3_initial_position={(-15.0, -10.0)}\nagent3_target_position={(10.0, 10.0)}\n\nagent4_id=4\nagent4_radius=2.5\nagent4_initial_position={(0.0, -10.0)}\nagent4_target_position={(-10.0, 15.0)}\n\n# Experiment configurations\nexperiment_seeds={(42, 123)}\nresults_dir=\"results\"\nanimation_template=\"{environment}_{seed}.gif\"\ncontrol_vis_filename=\"control_signals.gif\"\nobstacle_distance_filename=\"obstacle_distance.png\"\npath_uncertainty_filename=\"path_uncertainty.png\"\nconvergence_filename=\"convergence.png\"",
"Equations": "# State space model:\n# x_{t+1} = A * x_t + B * u_t + w_t, w_t ~ N(0, control_variance)\n# y_t = C * x_t + v_t, v_t ~ N(0, observation_variance)\n#\n# Obstacle avoidance constraint:\n# p(x_t | obstacle) ~ N(d(x_t, obstacle), gamma)\n# where d(x_t, obstacle) is the distance from position x_t to the nearest obstacle\n#\n# Goal constraint:\n# p(x_T | goal) ~ N(goal, goal_constraint_variance)\n# where x_T is the final position\n#\n# Collision avoidance constraint:\n# p(x_i, x_j) ~ N(||x_i - x_j|| - (r_i + r_j), gamma)\n# where x_i, x_j are positions of agents i and j, r_i, r_j are their radii",
"Time": "Dynamic\nDiscreteTime\nModelTimeHorizon=nr_steps",
"ActInfOntologyAnnotation": "dt=TimeStep\ngamma=ConstraintParameter\nnr_steps=TrajectoryLength\nnr_iterations=InferenceIterations\nnr_agents=NumberOfAgents\nsoftmin_temperature=SoftminTemperature\nA=StateTransitionMatrix\nB=ControlInputMatrix\nC=ObservationMatrix\ninitial_state_variance=InitialStateVariance\ncontrol_variance=ControlVariance\ngoal_constraint_variance=GoalConstraintVariance",
"ModelParameters": "nr_agents=4\nnr_steps=40\nnr_iterations=350",
"Footer": "Multi-agent Trajectory Planning - GNN Representation for RxInfer.jl",
"Signature": "Creator: AI Assistant for GNN\nDate: 2024-07-27\nStatus: Example for RxInfer.jl multi-agent trajectory planning"
}
full_model_data.json
{
"ModelName": "Multi-agent Trajectory Planning",
"ModelAnnotation": "This model represents a multi-agent trajectory planning scenario in RxInfer.jl.\nIt includes:\n- State space model for agents moving in a 2D environment\n- Obstacle avoidance constraints\n- Goal-directed behavior\n- Inter-agent collision avoidance\nThe model can be used to simulate trajectory planning in various environments with obstacles.",
"GNNVersionAndFlags": "GNN v1",
"Time": "Dynamic\nDiscreteTime\nModelTimeHorizon=nr_steps",
"ActInfOntologyAnnotation": "dt=TimeStep\ngamma=ConstraintParameter\nnr_steps=TrajectoryLength\nnr_iterations=InferenceIterations\nnr_agents=NumberOfAgents\nsoftmin_temperature=SoftminTemperature\nA=StateTransitionMatrix\nB=ControlInputMatrix\nC=ObservationMatrix\ninitial_state_variance=InitialStateVariance\ncontrol_variance=ControlVariance\ngoal_constraint_variance=GoalConstraintVariance"
}
model_metadata.json
🗓️ Report Generated: 2025-06-06 12:52:37
MCP Core Directory: /home/trim/Documents/GitHub/GeneralizedNotationNotation/src/mcp
Project Source Root (for modules): /home/trim/Documents/GitHub/GeneralizedNotationNotation/src
Output Directory for this report: /home/trim/Documents/GitHub/GeneralizedNotationNotation/output/mcp_processing_step
This section lists all tools currently registered with the MCP system, along with their defining module, arguments, and description.
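By way of illustration, a client might invoke one of these tools over the stdio server roughly as follows. This is a hypothetical sketch: the JSON-RPC envelope and the server's exact message framing are assumptions, not taken from server_stdio.py.

```python
import json
import subprocess

# Start the stdio MCP server as a child process (path from this report).
proc = subprocess.Popen(
    ["python", "src/mcp/server_stdio.py"],
    stdin=subprocess.PIPE, stdout=subprocess.PIPE, text=True,
)

# Hypothetical request envelope; the tool name and params match the
# registry below, but the wire format is assumed.
request = {
    "jsonrpc": "2.0", "id": 1,
    "method": "estimate_resources_for_gnn_file",
    "params": {"file_path": "src/gnn/examples/pymdp_pomdp_agent.md"},
}
proc.stdin.write(json.dumps(request) + "\n")
proc.stdin.flush()
print(proc.stdout.readline())   # one JSON-RPC response per line (assumed)
```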
ensure_directory_exists - src.setup.mcp(directory_path)
{
"directory_path": {
"type": "string",
"description": "Path of the directory to create if it doesn't exist."
}
}
estimate_resources_for_gnn_directory - src.gnn_type_checker.mcp(dir_path, recursive)
{
"dir_path": {
"type": "string",
"description": "Path to the directory for GNN resource estimation."
},
"recursive": {
"type": "boolean",
"description": "Search directory recursively. Defaults to False.",
"optional": true
}
}
estimate_resources_for_gnn_file - src.gnn_type_checker.mcp(file_path)
{
"file_path": {
"type": "string",
"description": "Path to the GNN file for resource estimation."
}
}
export_gnn_to_gexf - src.export.mcp(gnn_file_path, output_file_path)
{
"gnn_file_path": {
"type": "string",
"description": "Path to the input GNN Markdown file (.gnn.md)."
},
"output_file_path": {
"type": "string",
"description": "Path where the exported file will be saved."
}
}
export_gnn_to_graphml - src.export.mcp(gnn_file_path, output_file_path)
{
"gnn_file_path": {
"type": "string",
"description": "Path to the input GNN Markdown file (.gnn.md)."
},
"output_file_path": {
"type": "string",
"description": "Path where the exported file will be saved."
}
}
export_gnn_to_json - src.export.mcp(gnn_file_path, output_file_path)
{
"gnn_file_path": {
"type": "string",
"description": "Path to the input GNN Markdown file (.gnn.md)."
},
"output_file_path": {
"type": "string",
"description": "Path where the exported file will be saved."
}
}
export_gnn_to_json_adjacency_list - src.export.mcp(gnn_file_path, output_file_path)
{
"gnn_file_path": {
"type": "string",
"description": "Path to the input GNN Markdown file (.gnn.md)."
},
"output_file_path": {
"type": "string",
"description": "Path where the exported file will be saved."
}
}
export_gnn_to_plaintext_dsl - src.export.mcp(gnn_file_path, output_file_path)
{
"gnn_file_path": {
"type": "string",
"description": "Path to the input GNN Markdown file (.gnn.md)."
},
"output_file_path": {
"type": "string",
"description": "Path where the exported file will be saved."
}
}
export_gnn_to_plaintext_summary - src.export.mcp(gnn_file_path, output_file_path)
{
"gnn_file_path": {
"type": "string",
"description": "Path to the input GNN Markdown file (.gnn.md)."
},
"output_file_path": {
"type": "string",
"description": "Path where the exported file will be saved."
}
}
export_gnn_to_python_pickle - src.export.mcp(gnn_file_path, output_file_path)
{
"gnn_file_path": {
"type": "string",
"description": "Path to the input GNN Markdown file (.gnn.md)."
},
"output_file_path": {
"type": "string",
"description": "Path where the exported file will be saved."
}
}
export_gnn_to_xml - src.export.mcp(gnn_file_path, output_file_path)
{
"gnn_file_path": {
"type": "string",
"description": "Path to the input GNN Markdown file (.gnn.md)."
},
"output_file_path": {
"type": "string",
"description": "Path where the exported file will be saved."
}
}
find_project_gnn_files - src.setup.mcp(search_directory, recursive)
{
"search_directory": {
"type": "string",
"description": "The directory to search for GNN (.md) files."
},
"recursive": {
"type": "boolean",
"description": "Set to true to search recursively. Defaults to false.",
"optional": true
}
}
get_gnn_documentation - src.gnn.mcp(doc_name)
{
"doc_name": {
"type": "string",
"description": "Name of the GNN document (e.g., 'file_structure', 'punctuation')",
"enum": [
"file_structure",
"punctuation"
]
}
}
get_standard_output_paths - src.setup.mcp(base_output_directory)
{
"base_output_directory": {
"type": "string",
"description": "The base directory where output subdirectories will be managed."
}
}
list_render_targets - src.render.mcp()
{
"properties": {},
"title": "ListRenderTargetsInput",
"type": "object"
}
llm.explain_gnn_file - src.llm.mcp(file_path_str, aspect_to_explain)
{
"type": "object",
"properties": {
"file_path_str": {
"type": "string",
"description": "The absolute or relative path to the GNN file."
},
"aspect_to_explain": {
"type": "string",
"description": "(Optional) A specific part or concept within the GNN to focus the explanation on."
}
},
"required": [
"file_path_str"
]
}
llm.generate_professional_summary - src.llm.mcp(file_path_str, experiment_details, target_audience)
{
"type": "object",
"properties": {
"file_path_str": {
"type": "string",
"description": "The absolute or relative path to the GNN file."
},
"experiment_details": {
"type": "string",
"description": "(Optional) Text describing the experiments conducted with the model, including setup, results, or observations."
},
"target_audience": {
"type": "string",
"description": "(Optional) The intended audience for the summary (e.g., 'fellow researchers', 'project managers'). Default: 'fellow researchers'."
}
},
"required": [
"file_path_str"
]
}
llm.summarize_gnn_file - src.llm.mcp(file_path_str, user_prompt_suffix)
{
"type": "object",
"properties": {
"file_path_str": {
"type": "string",
"description": "The absolute or relative path to the GNN file (.md, .gnn.md, .json)."
},
"user_prompt_suffix": {
"type": "string",
"description": "(Optional) Additional instructions or focus points for the summary."
}
},
"required": [
"file_path_str"
]
}
parse_gnn_file - src.visualization.mcp(file_path)
{
"file_path": {
"type": "string",
"description": "Path to the GNN file to parse"
}
}
render_gnn_specification - src.render.mcp(input_data)
{
"properties": {
"gnn_specification": {
"anyOf": [
{
"additionalProperties": true,
"type": "object"
},
{
"type": "string"
}
],
"description": "The GNN specification itself as a dictionary, or a string URI/path to a GNN spec file (e.g., JSON).",
"title": "Gnn Specification"
},
"target_format": {
"description": "The target format to render the GNN specification to.",
"enum": [
"pymdp",
"rxinfer"
],
"title": "Target Format",
"type": "string"
},
"output_filename_base": {
"anyOf": [
{
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "Optional desired base name for the output file (e.g., 'my_model'). Extension is added automatically. If None, derived from GNN spec name or input file name.",
"title": "Output Filename Base"
},
"render_options": {
"anyOf": [
{
"additionalProperties": true,
"type": "object"
},
{
"type": "null"
}
],
"default": null,
"description": "Optional dictionary of specific options for the chosen renderer (e.g., data_bindings for RxInfer).",
"title": "Render Options"
}
},
"required": [
"gnn_specification",
"target_format"
],
"title": "RenderGnnInput",
"type": "object"
}
run_gnn_type_checker - src.tests.mcp(file_path)
{
"file_path": {
"type": "string",
"description": "Path to the GNN file to check"
}
}
run_gnn_type_checker_on_directory - src.tests.mcp(dir_path, report_file)
{
"dir_path": {
"type": "string",
"description": "Path to directory containing GNN files"
},
"report_file": {
"type": "string",
"description": "Optional path to save the report"
}
}
run_gnn_unit_tests - src.tests.mcp()
No schema provided.
sympy_analyze_stability - src.mcp.sympy_mcp(transition_matrices)
{
"type": "object",
"properties": {
"transition_matrices": {
"type": "array",
"description": "List of transition matrices to analyze"
}
},
"required": [
"transition_matrices"
]
}
sympy_cleanup - src.mcp.sympy_mcp()
{
"type": "object",
"properties": {}
}
sympy_get_latex - src.mcp.sympy_mcp(expression)
{
"type": "object",
"properties": {
"expression": {
"type": "string",
"description": "Expression to convert to LaTeX"
}
},
"required": [
"expression"
]
}
sympy_initialize - src.mcp.sympy_mcp(server_executable)
{
"type": "object",
"properties": {
"server_executable": {
"type": "string",
"description": "Path to SymPy MCP server executable",
"default": null
}
}
}
sympy_simplify_expression - src.mcp.sympy_mcp(expression)
{
"type": "object",
"properties": {
"expression": {
"type": "string",
"description": "Mathematical expression to simplify"
}
},
"required": [
"expression"
]
}
sympy_solve_equation - src.mcp.sympy_mcp(equation, variable, domain)
{
"type": "object",
"properties": {
"equation": {
"type": "string",
"description": "Equation to solve"
},
"variable": {
"type": "string",
"description": "Variable to solve for"
},
"domain": {
"type": "string",
"description": "Solution domain (COMPLEX, REAL, etc.)",
"default": "COMPLEX"
}
},
"required": [
"equation",
"variable"
]
}
sympy_validate_equation - src.mcp.sympy_mcp(equation, context)
{
"type": "object",
"properties": {
"equation": {
"type": "string",
"description": "Mathematical equation to validate"
},
"context": {
"type": "object",
"description": "GNN context for variable definitions",
"default": {}
}
},
"required": [
"equation"
]
}
sympy_validate_matrix - src.mcp.sympy_mcp(matrix_data, matrix_type)
{
"type": "object",
"properties": {
"matrix_data": {
"type": "array",
"description": "Matrix data as array of arrays"
},
"matrix_type": {
"type": "string",
"description": "Type of matrix (transition, observation, etc.)",
"default": "transition"
}
},
"required": [
"matrix_data"
]
}
type_check_gnn_directory - src.gnn_type_checker.mcp(dir_path, recursive, output_dir_base, report_md_filename)
{
"dir_path": {
"type": "string",
"description": "Path to the directory containing GNN files to be type-checked."
},
"recursive": {
"type": "boolean",
"description": "Search directory recursively. Defaults to False.",
"optional": true
},
"output_dir_base": {
"type": "string",
"description": "Optional base directory to save the report and other artifacts (HTML, JSON).",
"optional": true
},
"report_md_filename": {
"type": "string",
"description": "Optional filename for the markdown report (e.g., 'my_report.md'). Defaults to 'type_check_report.md'.",
"optional": true
}
}
type_check_gnn_file - src.gnn_type_checker.mcp(file_path)
{
"file_path": {
"type": "string",
"description": "Path to the GNN file to be type-checked."
}
}
visualize_gnn_directory - src.visualization.mcp(dir_path, output_dir)
{
"dir_path": {
"type": "string",
"description": "Path to directory containing GNN files"
},
"output_dir": {
"type": "string",
"description": "Optional output directory"
}
}
visualize_gnn_file - src.visualization.mcp(file_path, output_dir)
{
"file_path": {
"type": "string",
"description": "Path to the GNN file to visualize"
},
"output_dir": {
"type": "string",
"description": "Optional output directory"
}
}
This section verifies the presence of essential MCP files in the core directory: /home/trim/Documents/GitHub/GeneralizedNotationNotation/src/mcp
mcp.py: Found (20304 bytes)
meta_mcp.py: Found (4954 bytes)
cli.py: Found (4644 bytes)
server_stdio.py: Found (7620 bytes)
server_http.py: Found (7731 bytes)
Status: 5/5 core MCP files found. All core files seem present.
Checking for mcp.py in these subdirectories of /home/trim/Documents/GitHub/GeneralizedNotationNotation/src: ['export', 'gnn', 'gnn_type_checker', 'ontology', 'setup', 'tests', 'visualization', 'llm']
export (at /home/trim/Documents/GitHub/GeneralizedNotationNotation/src/export)
mcp.py Status: Found (7976 bytes)
def _handle_export(export_func, gnn_file_path, output_file_path, format_name, requires_nx) (AST parsed) - "Generic helper to run an export function and handle common exceptions."
def export_gnn_to_gexf(gnn_file_path, output_file_path) - Description: "Exports a GNN model to GEXF graph format (requires NetworkX)."
{
"gnn_file_path": {
"type": "string",
"description": "Path to the input GNN Markdown file (.gnn.md)."
},
"output_file_path": {
"type": "string",
"description": "Path where the exported file will be saved."
}
}
def export_gnn_to_gexf_mcp(gnn_file_path, output_file_path) (AST parsed)
def export_gnn_to_graphml(gnn_file_path, output_file_path) - Description: "Exports a GNN model to GraphML graph format (requires NetworkX)."
{
"gnn_file_path": {
"type": "string",
"description": "Path to the input GNN Markdown file (.gnn.md)."
},
"output_file_path": {
"type": "string",
"description": "Path where the exported file will be saved."
}
}
def export_gnn_to_graphml_mcp(gnn_file_path, output_file_path) (AST parsed)
def export_gnn_to_json(gnn_file_path, output_file_path) - Description: "Exports a GNN model to JSON format."
{
"gnn_file_path": {
"type": "string",
"description": "Path to the input GNN Markdown file (.gnn.md)."
},
"output_file_path": {
"type": "string",
"description": "Path where the exported file will be saved."
}
}
def export_gnn_to_json_adjacency_list(gnn_file_path, output_file_path) - Description: "Exports a GNN model to JSON Adjacency List graph format (requires NetworkX)."
{
"gnn_file_path": {
"type": "string",
"description": "Path to the input GNN Markdown file (.gnn.md)."
},
"output_file_path": {
"type": "string",
"description": "Path where the exported file will be saved."
}
}
def export_gnn_to_json_adjacency_list_mcp(gnn_file_path, output_file_path) (AST parsed)
def export_gnn_to_json_mcp(gnn_file_path, output_file_path) (AST parsed)
def export_gnn_to_plaintext_dsl(gnn_file_path, output_file_path) - Description: "Exports a GNN model back to its GNN DSL plain text format."
{
"gnn_file_path": {
"type": "string",
"description": "Path to the input GNN Markdown file (.gnn.md)."
},
"output_file_path": {
"type": "string",
"description": "Path where the exported file will be saved."
}
}
def export_gnn_to_plaintext_dsl_mcp(gnn_file_path, output_file_path) (AST parsed)
def export_gnn_to_plaintext_summary(gnn_file_path, output_file_path) - Description: "Exports a GNN model to a human-readable plain text summary."
{
"gnn_file_path": {
"type": "string",
"description": "Path to the input GNN Markdown file (.gnn.md)."
},
"output_file_path": {
"type": "string",
"description": "Path where the exported file will be saved."
}
}
def export_gnn_to_plaintext_summary_mcp(gnn_file_path, output_file_path) (AST parsed)
def export_gnn_to_python_pickle(gnn_file_path, output_file_path) - Description: "Serializes a GNN model to a Python pickle file."
{
"gnn_file_path": {
"type": "string",
"description": "Path to the input GNN Markdown file (.gnn.md)."
},
"output_file_path": {
"type": "string",
"description": "Path where the exported file will be saved."
}
}
def export_gnn_to_python_pickle_mcp(gnn_file_path, output_file_path) (AST parsed)
def export_gnn_to_xml(gnn_file_path, output_file_path) - Description: "Exports a GNN model to XML format."
{
"gnn_file_path": {
"type": "string",
"description": "Path to the input GNN Markdown file (.gnn.md)."
},
"output_file_path": {
"type": "string",
"description": "Path where the exported file will be saved."
}
}
def export_gnn_to_xml_mcp(gnn_file_path, output_file_path) (AST parsed)
def register_tools(mcp_instance) (AST parsed) - "Registers all GNN export tools with the MCP instance."
gnn (at /home/trim/Documents/GitHub/GeneralizedNotationNotation/src/gnn)
mcp.py Status: Found (4122 bytes)
def _retrieve_gnn_doc_resource(uri) (AST parsed) - "Retrieve GNN documentation resource by URI."
def get_gnn_documentation(doc_name) - Description: "Retrieve the content of a GNN core documentation file (e.g., syntax, file structure)."
{
"doc_name": {
"type": "string",
"description": "Name of the GNN document (e.g., 'file_structure', 'punctuation')",
"enum": [
"file_structure",
"punctuation"
]
}
}
def register_tools(mcp_instance) (AST parsed) - "Register GNN documentation tools and resources with the MCP."
gnn_type_checker (at /home/trim/Documents/GitHub/GeneralizedNotationNotation/src/gnn_type_checker)
mcp.py Status: Found (10921 bytes)
def estimate_resources_for_gnn_directory(dir_path, recursive) - Description: "Estimates computational resources for all GNN files in a specified directory."
{
"dir_path": {
"type": "string",
"description": "Path to the directory for GNN resource estimation."
},
"recursive": {
"type": "boolean",
"description": "Search directory recursively. Defaults to False.",
"optional": true
}
}
def estimate_resources_for_gnn_directory_mcp(dir_path, recursive) (AST parsed) - "Estimate resources for all GNN files in a directory. Exposed via MCP."
def estimate_resources_for_gnn_file(file_path) - Description: "Estimates computational resources (memory, inference, storage) for a GNN model file."
{
"file_path": {
"type": "string",
"description": "Path to the GNN file for resource estimation."
}
}
def estimate_resources_for_gnn_file_mcp(file_path) (AST parsed) - "Estimate computational resources for a single GNN file. Exposed via MCP."
def register_tools(mcp_instance) (AST parsed) - "Register GNN type checker and resource estimator tools with the MCP."
def type_check_gnn_directory(dir_path, recursive, output_dir_base, report_md_filename) - Description: "Runs the GNN type checker on all GNN files in a specified directory. If output_dir_base is provided, reports are generated."
{
"dir_path": {
"type": "string",
"description": "Path to the directory containing GNN files to be type-checked."
},
"recursive": {
"type": "boolean",
"description": "Search directory recursively. Defaults to False.",
"optional": true
},
"output_dir_base": {
"type": "string",
"description": "Optional base directory to save the report and other artifacts (HTML, JSON).",
"optional": true
},
"report_md_filename": {
"type": "string",
"description": "Optional filename for the markdown report (e.g., 'my_report.md'). Defaults to 'type_check_report.md'.",
"optional": true
}
}
def type_check_gnn_directory_mcp(dir_path, recursive, output_dir_base, report_md_filename) (AST parsed) - "Run the GNN type checker on all GNN files in a directory. Exposed via MCP."
def type_check_gnn_file(file_path) - Description: "Runs the GNN type checker on a specified GNN model file."
{
"file_path": {
"type": "string",
"description": "Path to the GNN file to be type-checked."
}
}
def type_check_gnn_file_mcp(file_path) (AST parsed) - "Run the GNN type checker on a single GNN file. Exposed via MCP."
ontology (at /home/trim/Documents/GitHub/GeneralizedNotationNotation/src/ontology)
mcp.py Status: Found (13473 bytes)
def generate_ontology_report_for_file(gnn_file_path, parsed_annotations, validation_results) (AST parsed) - "Generates a markdown formatted report string for a single GNN file's ontology annotations."
def get_mcp_interface() (AST parsed) - "Returns the MCP interface for the Ontology module."
def load_defined_ontology_terms(ontology_terms_path, verbose) (AST parsed) - "Loads defined ontological terms from a JSON file."
def parse_gnn_ontology_section(gnn_file_content, verbose) (AST parsed) - "Parses the 'ActInfOntologyAnnotation' section from GNN file content."
def validate_annotations(parsed_annotations, defined_terms, verbose) (AST parsed) - "Validates parsed GNN annotations against a set of defined ontological terms."
setup (at /home/trim/Documents/GitHub/GeneralizedNotationNotation/src/setup)
mcp.py Status: Found (4257 bytes)
def ensure_directory_exists(directory_path) - Description: "Ensures a directory exists, creating it if necessary. Returns the absolute path."
{
"directory_path": {
"type": "string",
"description": "Path of the directory to create if it doesn't exist."
}
}
def ensure_directory_exists_mcp(directory_path) (AST parsed) - "Ensure a directory exists, creating it if necessary. Exposed via MCP."
def find_project_gnn_files(search_directory, recursive) - Description: "Finds all GNN (.md) files in a specified directory within the project."
{
"search_directory": {
"type": "string",
"description": "The directory to search for GNN (.md) files."
},
"recursive": {
"type": "boolean",
"description": "Set to true to search recursively. Defaults to false.",
"optional": true
}
}
def find_project_gnn_files_mcp(search_directory, recursive) (AST parsed) - "Find all GNN (.md) files in a directory. Exposed via MCP."
def get_standard_output_paths(base_output_directory) - Description: "Gets a dictionary of standard output directory paths (e.g., for type_check, visualization), creating them if needed."
{
"base_output_directory": {
"type": "string",
"description": "The base directory where output subdirectories will be managed."
}
}
def get_standard_output_paths_mcp(base_output_directory) (AST parsed) - "Get standard output paths for the pipeline. Exposed via MCP."
def register_tools(mcp_instance) (AST parsed) - "Register setup utility tools with the MCP."
tests (at /home/trim/Documents/GitHub/GeneralizedNotationNotation/src/tests)
mcp.py Status: Found (7083 bytes)
def get_test_report(uri) (AST parsed) - "Retrieve a test report by URI."
def register_tools(mcp) (AST parsed) - "Register test tools with the MCP."
def run_gnn_type_checker(file_path) - Description: "Run the GNN type checker on a specific file (via test module)."
{
"file_path": {
"type": "string",
"description": "Path to the GNN file to check"
}
}
def run_gnn_type_checker_on_directory(dir_path, report_file) - Description: "Run the GNN type checker on all GNN files in a directory (via test module)."
{
"dir_path": {
"type": "string",
"description": "Path to directory containing GNN files"
},
"report_file": {
"type": "string",
"description": "Optional path to save the report"
}
}
def run_gnn_unit_tests() - Description: "Run the GNN unit tests and return results."
def run_type_checker_on_directory(dir_path, report_file) (AST parsed) - "Run the GNN type checker on a directory of files."
def run_type_checker_on_file(file_path) (AST parsed) - "Run the GNN type checker on a file."
def run_unit_tests() (AST parsed) - "Run the GNN unit tests."
visualization (at /home/trim/Documents/GitHub/GeneralizedNotationNotation/src/visualization)
mcp.py Status: Found (5934 bytes)
def get_visualization_results(uri) (AST parsed) - "Retrieve visualization results by URI."
def parse_gnn_file(file_path) - Description: "Parse a GNN file without visualization"
{
"file_path": {
"type": "string",
"description": "Path to the GNN file to parse"
}
}
def register_tools(mcp) (AST parsed) - "Register visualization tools with the MCP."
def visualize_directory(dir_path, output_dir) (AST parsed) - "Visualize all GNN files in a directory through MCP."
def visualize_file(file_path, output_dir) (AST parsed) - "Visualize a GNN file through MCP."
def visualize_gnn_directory(dir_path, output_dir) - Description: "Visualize all GNN files in a directory"
{
"dir_path": {
"type": "string",
"description": "Path to directory containing GNN files"
},
"output_dir": {
"type": "string",
"description": "Optional output directory"
}
}
def visualize_gnn_file(file_path, output_dir) - Description: "Generate visualizations for a specific GNN file."
{
"file_path": {
"type": "string",
"description": "Path to the GNN file to visualize"
},
"output_dir": {
"type": "string",
"description": "Optional output directory"
}
}
llm (at /home/trim/Documents/GitHub/GeneralizedNotationNotation/src/llm)
mcp.py Status: Found (19238 bytes)
def ensure_llm_tools_registered(mcp_instance_ref) (AST parsed) - "Ensures that LLM tools are registered with the provided MCP instance."
def explain_gnn_file_content(file_path_str, aspect_to_explain) (AST parsed) - "Reads a GNN file, sends its content to an LLM, and returns an explanation."
def generate_professional_summary_from_gnn(file_path_str, experiment_details, target_audience) (AST parsed) - "Generates a professional summary of a GNN model and its experimental context."
def initialize_llm_module(mcp_instance_ref) (AST parsed) - "Initializes the LLM module, loads API key, and updates MCP status."
def llm.explain_gnn_file(file_path_str, aspect_to_explain) - Description: "Reads a GNN specification file and uses an LLM to generate an explanation of its content. Can focus on a specific aspect if provided."
{
"type": "object",
"properties": {
"file_path_str": {
"type": "string",
"description": "The absolute or relative path to the GNN file."
},
"aspect_to_explain": {
"type": "string",
"description": "(Optional) A specific part or concept within the GNN to focus the explanation on."
}
},
"required": [
"file_path_str"
]
}
def llm.generate_professional_summary(file_path_str, experiment_details, target_audience) - Description: "Reads a GNN file and optional experiment details, then uses an LLM to generate a professional summary suitable for reports or papers."
{
"type": "object",
"properties": {
"file_path_str": {
"type": "string",
"description": "The absolute or relative path to the GNN file."
},
"experiment_details": {
"type": "string",
"description": "(Optional) Text describing the experiments conducted with the model, including setup, results, or observations."
},
"target_audience": {
"type": "string",
"description": "(Optional) The intended audience for the summary (e.g., 'fellow researchers', 'project managers'). Default: 'fellow researchers'."
}
},
"required": [
"file_path_str"
]
}
def llm.summarize_gnn_file(file_path_str, user_prompt_suffix) - Description: "Reads a GNN specification file and uses an LLM to generate a concise summary of its content. Optionally, a user prompt suffix can refine the summary focus."
{
"type": "object",
"properties": {
"file_path_str": {
"type": "string",
"description": "The absolute or relative path to the GNN file (.md, .gnn.md, .json)."
},
"user_prompt_suffix": {
"type": "string",
"description": "(Optional) Additional instructions or focus points for the summary."
}
},
"required": [
"file_path_str"
]
}
def register_tools(mcp_instance_ref) (AST parsed)
def summarize_gnn_file_content(file_path_str, user_prompt_suffix) (AST parsed) - "Reads a GNN file, sends its content to an LLM, and returns a summary."
mcp.py Integrations Found: 8/8 modules have an mcp.py integration file.
Please ensure each functional module that should be exposed via MCP has its own mcp.py following the project's MCP architecture.
🗓️ Report Generated: 2025-06-06 12:52:38
🎯 GNN Source Directory: src/gnn/examples
📖 Ontology Terms Definition: src/ontology/act_inf_ontology_terms.json (Loaded: 48 terms)
src/gnn/examples/pymdp_pomdp_agent.md
A_m0 -> LikelihoodMatrixModality0
A_m1 -> LikelihoodMatrixModality1
A_m2 -> LikelihoodMatrixModality2
B_f0 -> TransitionMatrixFactor0
B_f1 -> TransitionMatrixFactor1
C_m0 -> LogPreferenceVectorModality0
C_m1 -> LogPreferenceVectorModality1
C_m2 -> LogPreferenceVectorModality2
D_f0 -> PriorOverHiddenStatesFactor0
D_f1 -> PriorOverHiddenStatesFactor1
s_f0 -> HiddenStateFactor0
s_f1 -> HiddenStateFactor1
s_prime_f0 -> NextHiddenStateFactor0
s_prime_f1 -> NextHiddenStateFactor1
o_m0 -> ObservationModality0
o_m1 -> ObservationModality1
o_m2 -> ObservationModality2
π_f1 -> PolicyVectorFactor1
u_f1 -> ActionFactor1
G -> ExpectedFreeEnergy
Validation Summary: All ontological terms are recognized.
src/gnn/examples/rxinfer_multiagent_gnn.md
- dt -> TimeStep (INVALID TERM)
- gamma -> ConstraintParameter (INVALID TERM)
- nr_steps -> TrajectoryLength (INVALID TERM)
- nr_iterations -> InferenceIterations (INVALID TERM)
- nr_agents -> NumberOfAgents (INVALID TERM)
- softmin_temperature -> SoftminTemperature (INVALID TERM)
- A -> StateTransitionMatrix (INVALID TERM)
- B -> ControlInputMatrix (INVALID TERM)
- C -> ObservationMatrix (INVALID TERM)
- initial_state_variance -> InitialStateVariance (INVALID TERM)
- control_variance -> ControlVariance (INVALID TERM)
- goal_constraint_variance -> GoalConstraintVariance (INVALID TERM)

Validation Summary: 12 unrecognized ontological term(s) found.
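The contrast between the two files above comes down to a set-membership test against the loaded term definitions. A minimal sketch of that check (assuming act_inf_ontology_terms.json is keyed by term name; the real loader may differ):

```python
# Sketch of the lookup behind the validation summaries above.
# The exact structure of act_inf_ontology_terms.json is assumed here.
import json

with open("src/ontology/act_inf_ontology_terms.json") as f:
    defined_terms = set(json.load(f))  # 48 terms per the report header

mappings = {
    "A_m0": "LikelihoodMatrixModality0",  # recognized above
    "dt": "TimeStep",                     # flagged INVALID TERM above
}

for variable, term in mappings.items():
    status = "ok" if term in defined_terms else "INVALID TERM"
    print(f"{variable} -> {term} ({status})")
```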
{
"model_purpose": "The GNN file represents a Multifactor PyMDP (Partially Observable Markov Decision Process) agent designed for Active Inference, capable of processing multiple observation modalities and managing hidden state factors. It aims to infer states, policies, and actions based on observations and control factors.",
"key_components": {
"hidden_states": {
"factors": {
"reward_level": {
"num_states": 2,
"description": "Represents the level of reward as a hidden state."
},
"decision_state": {
"num_states": 3,
"description": "Represents the current decision-making state of the agent."
}
}
},
"observations": {
"modalities": {
"state_observation": {
"num_outcomes": 3,
"description": "Observations related to the agent's current state."
},
"reward": {
"num_outcomes": 3,
"description": "Observations related to the reward received by the agent."
},
"decision_proprioceptive": {
"num_outcomes": 3,
"description": "Observations related to the agent's decision-making process."
}
}
},
"actions": {
"decision_state": {
"num_actions": 3,
"description": "Actions that the agent can take to influence its decision state."
}
},
"control": {
"description": "The decision state factor is controllable, allowing the agent to take actions based on its policy."
}
},
"component_interactions": {
"hidden_states_to_observations": "Hidden states influence the likelihood of observations through A_m matrices.",
"observations_to_policy": "Observations are used to compute expected free energy and inform policy decisions.",
"policy_to_action": "The policy distribution over actions directly influences the action taken by the agent.",
"state_transitions": "The state transitions are defined by B_f matrices, which govern how hidden states evolve based on actions."
},
"data_types_and_dimensions": {
"A_matrices": {
"type": "float",
"dimensions": {
"state_observation": "[3, 2, 3]",
"reward": "[3, 2, 3]",
"decision_proprioceptive": "[3, 2, 3]"
}
},
"B_matrices": {
"type": "float",
"dimensions": {
"reward_level": "[2, 2, 1]",
"decision_state": "[3, 3, 3]"
}
},
"C_vectors": {
"type": "float",
"dimensions": {
"state_observation": "[3]",
"reward": "[3]",
"decision_proprioceptive": "[3]"
}
},
"D_vectors": {
"type": "float",
"dimensions": {
"reward_level": "[2]",
"decision_state": "[3]"
}
},
"hidden_states": {
"dimensions": {
"reward_level": "[2, 1]",
"decision_state": "[3, 1]"
}
},
"observations": {
"dimensions": {
"state_observation": "[3, 1]",
"reward": "[3, 1]",
"decision_proprioceptive": "[3, 1]"
}
},
"policy": {
"dimensions": "[3]"
},
"action": {
"dimensions": "[1]"
},
"expected_free_energy": {
"dimensions": "[1]"
},
"time_step": {
"dimensions": "[1]"
}
},
"potential_applications": [
"Developing intelligent agents for decision-making in complex environments.",
"Simulation of adaptive behaviors in robotics.",
"Modeling cognitive processes in psychology and neuroscience.",
"Optimizing resource allocation in uncertain environments."
],
"limitations_or_ambiguities": [
"The initial parameterization and specific values of matrices may need empirical validation.",
"The model's performance can vary based on the accuracy of the observation modalities and state transitions.",
"The unbounded model time horizon may complicate certain implementations or simulations."
],
"ontology_mapping_assessment": {
"presence_of_terms": true,
"relevance": "The ActInfOntology terms are relevant and accurately represent the components of the model, facilitating alignment with existing frameworks in Active Inference."
}
}
pymdp_pomdp_agent_comprehensive_analysis.json
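The `data_types_and_dimensions` block above fully determines the array shapes. A numpy-only sketch that allocates placeholders with exactly those shapes (values are uniform fillers, not the GNN file's actual parameters):

```python
# Allocate A/B/C/D with the shapes listed in data_types_and_dimensions.
import numpy as np

num_obs = [3, 3, 3]   # outcomes per modality
num_states = [2, 3]   # reward_level, decision_state

# A[m]: P(o_m | s_f0, s_f1), shape [3, 2, 3] for each modality
A = [np.full((num_obs[m], *num_states), 1.0 / num_obs[m]) for m in range(3)]

# B[f]: P(s'_f | s_f, u_f); reward_level has 1 implicit action, decision_state has 3
B = [np.zeros((2, 2, 1)), np.zeros((3, 3, 3))]

# C[m]: log-preferences over outcomes; D[f]: uniform priors over states
C = [np.zeros(3) for _ in range(3)]
D = [np.ones(n) / n for n in num_states]

assert A[0].shape == (3, 2, 3) and B[1].shape == (3, 3, 3)
```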
[
{
"question": "What are the implications of having multiple observation modalities in the PyMDP agent, and how might they interact with the hidden state factors?",
"answer": "The GNN file indicates that the PyMDP agent has multiple observation modalities, specifically \"state_observation,\" \"reward,\" and \"decision_proprioceptive,\" each with three possible outcomes. The implications of having these multiple modalities are as follows:\n\n1. **Richness of Information**: Each modality can provide distinct and complementary information about the environment, allowing the agent to make more informed decisions based on varied inputs.\n\n2. **Interaction with Hidden States**: The observation modalities are linked to the hidden state factors, \"reward_level\" and \"decision_state.\" The likelihood matrices (A_m0, A_m1, A_m2) define how observations relate to the hidden states. This implies that changes in hidden states can affect the probabilities of observations, and vice versa, creating a dynamic interaction where observations can help infer hidden states.\n\n3. **Policy Inference**: The presence of multiple observation modalities can influence the policy (\u03c0_f1) derived from the agent's internal model. The preferences (C_m0, C_m1, C_m2) associated with each modality can guide the agent's decision-making process, allowing it to weigh various observations differently based on their relevance to the hidden state factors.\n\nOverall, the interaction between multiple observation modalities and hidden state factors enhances the agent's ability to adapt and optimize its actions based on a broader set of environmental cues."
},
{
"question": "How does the controllability of the 'decision_state' factor influence the agent's decision-making process and overall performance?",
"answer": "The GNN file does not provide enough information to explicitly detail how the controllability of the 'decision_state' factor influences the agent's decision-making process and overall performance. While it mentions that the 'decision_state' factor is controllable with 3 possible actions, there are no specific insights or metrics related to its impact on decision-making or performance outcomes."
},
{
"question": "What assumptions are made regarding the transitions defined in the B_f0 and B_f1 matrices, particularly in terms of the independence of actions and states?",
"answer": "The GNN file indicates that the transitions defined in the B_f0 and B_f1 matrices assume a level of independence between actions and states. Specifically:\n\n- **B_f0**: This matrix is defined for a hidden state factor with 2 states and 1 implicit (uncontrolled) action. The transitions are represented as an identity matrix, suggesting that the next state (s_next) depends solely on the previous state (s_prev) and is unaffected by actions, which are not explicitly included.\n\n- **B_f1**: This matrix is defined for a hidden state factor with 3 states and 3 actions. Each action's effect is described using separate identity matrices for each action, implying that the transitions for each next state depend only on the previous state and the chosen action, without any interaction or dependence on the state itself.\n\nThus, the assumptions made are that transitions for B_f0 are independent of actions, while for B_f1, transitions are conditioned on actions but still independent of the underlying states when determining the next state."
},
{
"question": "How do the preferences set in the C_m1 vector impact the agent's behavior, especially in terms of decision-making and policy formulation?",
"answer": "The preferences set in the C_m1 vector directly influence the agent's behavior by affecting its decision-making and policy formulation. Specifically, C_m1, which contains the log preferences for modality 1 (reward), includes a value of 1.0 for the first observation, -2.0 for the second, and 0.0 for the third. \n\nThese values suggest that the agent has a strong preference for the first observation (reward), a negative preference for the second (indicating aversion), and a neutral stance towards the third. As a result, when formulating policies, the agent will likely prioritize actions that lead to outcomes corresponding to the first observation, while avoiding actions leading to the second observation due to its negative preference. This differential weighting in C_m1 will thus shape the agent's action selection process, guiding it towards maximizing expected rewards while minimizing unfavorable outcomes. \n\nOverall, C_m1 plays a crucial role in shaping the agent's policy vector (\u03c0_f1), ultimately influencing the actions the agent chooses to take in its environment."
},
{
"question": "What is the significance of the uniform priors defined in the D_f0 and D_f1 vectors, and how might different prior distributions alter the agent's inference and learning?",
"answer": "The uniform priors defined in the D_f0 and D_f1 vectors indicate that the agent starts with no prior preference or bias towards any particular hidden state in both factors. This means that all states are considered equally likely at the beginning of the agent's operation. \n\nIf different prior distributions were used\u2014such as biased priors favoring certain states\u2014this would affect the agent's inference and learning by influencing the initial beliefs about the hidden states. For instance, a prior that heavily favors one state over others might lead the agent to converge more quickly towards that state, potentially skewing its learning and decision-making processes. Such biases could result in faster learning in scenarios where the favored state is indeed correct, but could also lead to suboptimal performance if the favored state is incorrect or misleading. Thus, the choice of prior distribution is crucial in shaping the agent's learning trajectory and its ability to adapt to the environment."
}
]
pymdp_pomdp_agent_qa.json
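The identity-matrix structure described in the B_f0/B_f1 answer above is easy to make concrete. A numpy sketch mirroring that description (whether the actual GNN file uses identity slices or action-dependent permutations is not restated in this report):

```python
# Transition matrices as described in the Q&A answer: B_f0 copies the
# previous state; B_f1 applies one identity slice per action.
import numpy as np

# B_f0: 2 states, 1 implicit action -> next state == previous state
B_f0 = np.eye(2)[:, :, None]               # shape (2, 2, 1)

# B_f1: 3 states, 3 actions -> an identity transition for each action
B_f1 = np.stack([np.eye(3)] * 3, axis=2)   # shape (3, 3, 3)

assert np.allclose(B_f1[:, :, 0], np.eye(3))
```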
### Summary of the Multifactor PyMDP Agent GNN Model

**Model Name:** Multifactor PyMDP Agent v1
**Purpose:** The model represents a PyMDP (Partially Observable Markov Decision Process) agent that utilizes multiple observation modalities and hidden state factors to make decisions based on observed states and preferences, implemented in the Active Inference framework.
**Key Components:**
1. **Observation Modalities:**
- **State Observation:** 3 outcomes
- **Reward:** 3 outcomes
- **Decision Proprioceptive:** 3 outcomes
2. **Hidden State Factors:**
- **Reward Level:** 2 states
- **Decision State:** 3 states
- The decision state factor is controllable with 3 possible actions.
3. **State Transition and Observation Likelihood Matrices:**
- **A_m0, A_m1, A_m2:** Matrices defining the likelihood of observations given the hidden states for each modality.
- **B_f0, B_f1:** Transition matrices for hidden state factors, with B_f0 being uncontrolled and B_f1 being controlled by actions.
4. **Preference Vectors and Priors:**
- **C_m0, C_m1, C_m2:** Preference vectors for each modality, influencing the expected free energy.
- **D_f0, D_f1:** Priors over the hidden states for each factor, initialized uniformly.
**Main Connections:**
- The relationships among the hidden states, observations, and control actions are defined through a series of connections:
- The priors connect to the hidden states.
- Hidden states influence the observation likelihood matrices (A_m).
- Observations drive the calculations of expected free energy (G), which in turn influences the policy distribution (π_f1).
- The chosen action (u_f1) for the controllable factor is derived from the policy distribution.
This model exemplifies a structured approach to decision-making in environments characterized by uncertainty and multiple simultaneous factors.
pymdp_pomdp_agent_summary.txt
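The connection flow in this summary (priors -> states -> A -> observations -> G -> π_f1 -> u_f1) maps directly onto a standard perception-action loop. A sketch assuming the inferactively-pymdp package, with randomly normalized placeholder parameters rather than the GNN file's actual values:

```python
# Perception-action loop for the two-factor, three-modality agent.
# Shapes follow the dimensions reported above; parameters are placeholders.
from pymdp.agent import Agent
from pymdp import utils

A = utils.random_A_matrix([3, 3, 3], [2, 3])  # 3 modalities over 2 factors
B = utils.random_B_matrix([2, 3], [1, 3])     # factor 1 has 3 actions
C = utils.obj_array_zeros([3, 3, 3])          # neutral log-preferences
D = utils.obj_array_uniform([2, 3])           # uniform priors D_f0, D_f1

agent = Agent(A=A, B=B, C=C, D=D)

obs = [0, 1, 2]                    # one outcome index per observation modality
qs = agent.infer_states(obs)       # posterior over hidden states
q_pi, G = agent.infer_policies()   # policy posterior and (negative) expected free energy
action = agent.sample_action()     # chosen action u_f1 for the controllable factor
```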
### Summary of the GNN Model: Multi-agent Trajectory Planning

**Model Name:** Multi-agent Trajectory Planning
**Purpose:**
This model is designed to simulate trajectory planning for multiple agents in a 2D environment using RxInfer.jl. It incorporates various constraints, including obstacle avoidance, goal-directed behavior, and inter-agent collision avoidance, making it suitable for complex environments with obstacles.
**Key Components:**
1. **State Space Model:**
- **Parameters:**
- Time step (`dt`), constraint parameter (`gamma`), number of time steps (`nr_steps`), number of agents (`nr_agents`), and softmin temperature.
- **Matrices:**
- State transition matrix (`A`), control input matrix (`B`), and observation matrix (`C`).
- **Prior Distributions:**
- Variances for initial states, control inputs, and goal constraints.
2. **Environment Definitions:**
- Obstacles are defined for different scenarios, including door and wall environments, with parameters for center and size.
3. **Agent Configurations:**
- Each agent is defined with an ID, radius, initial position, and target position. Four agents are configured in total, each with distinct properties.
4. **Experiment Configurations:**
- Includes random seeds for reproducibility and file paths for saving results and visualizations.
**Main Connections:**
- The model parameters interact with the state space model to generate agent trajectories.
- The agent trajectories are utilized to enforce goal constraints and obstacle avoidance.
- The planning system integrates the outcomes of goal-directed behavior, obstacle avoidance, and collision avoidance to achieve effective trajectory planning.
This GNN model serves as a foundational structure for simulating and analyzing multi-agent interactions in trajectory planning scenarios, addressing both dynamic environment challenges and agent-specific behaviors.
rxinfer_multiagent_gnn_summary.txt
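The summary names the standard linear-Gaussian ingredients x' = A·x + B·u, y = C·x. As a hedged illustration only, a 2D double integrator is a common choice for this kind of trajectory model; the GNN file's actual A, B, C values are not reproduced in this report:

```python
# Illustrative linear state-space step for 2D trajectory planning.
# The double-integrator form below is an assumption, not the model's values.
import numpy as np

dt = 1.0  # time step parameter `dt` from the model

# State x = [pos_x, vel_x, pos_y, vel_y]
A = np.array([[1, dt, 0,  0],
              [0,  1, 0,  0],
              [0,  0, 1, dt],
              [0,  0, 0,  1]])

# Control u = [acc_x, acc_y] enters the velocity components
B = np.array([[0,  0],
              [dt, 0],
              [0,  0],
              [0, dt]])

# Observation y = [pos_x, pos_y]
C = np.array([[1, 0, 0, 0],
              [0, 0, 1, 0]])

x = np.zeros(4)
u = np.array([0.1, -0.05])
x_next = A @ x + B @ u
y = C @ x_next
```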